00:00:00.001 Started by upstream project "autotest-per-patch" build number 132048
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.153 The recommended git tool is: git
00:00:00.153 using credential 00000000-0000-0000-0000-000000000002
00:00:00.155 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.199 Fetching changes from the remote Git repository
00:00:00.201 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.239 Using shallow fetch with depth 1
00:00:00.239 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.239 > git --version # timeout=10
00:00:00.271 > git --version # 'git version 2.39.2'
00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.288 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.288 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:11.119 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.130 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.142 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD)
00:00:11.142 > git config core.sparsecheckout # timeout=10
00:00:11.154 > git read-tree -mu HEAD # timeout=10
00:00:11.171 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5
00:00:11.188 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job"
00:00:11.188 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10
00:00:11.277 [Pipeline] Start of Pipeline
00:00:11.292 [Pipeline] library
00:00:11.293 Loading library shm_lib@master
00:00:11.293 Library shm_lib@master is cached. Copying from home.
00:00:11.310 [Pipeline] node
00:00:11.320 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:11.322 [Pipeline] {
00:00:11.331 [Pipeline] catchError
00:00:11.333 [Pipeline] {
00:00:11.346 [Pipeline] wrap
00:00:11.355 [Pipeline] {
00:00:11.363 [Pipeline] stage
00:00:11.365 [Pipeline] { (Prologue)
00:00:11.384 [Pipeline] echo
00:00:11.386 Node: VM-host-WFP1
00:00:11.393 [Pipeline] cleanWs
00:00:11.403 [WS-CLEANUP] Deleting project workspace...
00:00:11.403 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.409 [WS-CLEANUP] done
00:00:11.606 [Pipeline] setCustomBuildProperty
00:00:11.676 [Pipeline] httpRequest
00:00:12.085 [Pipeline] echo
00:00:12.087 Sorcerer 10.211.164.101 is alive
00:00:12.094 [Pipeline] retry
00:00:12.096 [Pipeline] {
00:00:12.106 [Pipeline] httpRequest
00:00:12.110 HttpMethod: GET
00:00:12.110 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:12.111 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:12.127 Response Code: HTTP/1.1 200 OK
00:00:12.127 Success: Status code 200 is in the accepted range: 200,404
00:00:12.128 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:17.478 [Pipeline] }
00:00:17.496 [Pipeline] // retry
00:00:17.504 [Pipeline] sh
00:00:17.836 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:17.853 [Pipeline] httpRequest
00:00:18.242 [Pipeline] echo
00:00:18.243 Sorcerer 10.211.164.101 is alive
00:00:18.253 [Pipeline] retry
00:00:18.255 [Pipeline] {
00:00:18.270 [Pipeline] httpRequest
00:00:18.275 HttpMethod: GET
00:00:18.275 URL: http://10.211.164.101/packages/spdk_a46541aa1c4cc20e5e126b9ffa47f495be8cb3e0.tar.gz
00:00:18.276 Sending request to url: http://10.211.164.101/packages/spdk_a46541aa1c4cc20e5e126b9ffa47f495be8cb3e0.tar.gz
00:00:18.297 Response Code: HTTP/1.1 200 OK
00:00:18.298 Success: Status code 200 is in the accepted range: 200,404
00:00:18.298 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a46541aa1c4cc20e5e126b9ffa47f495be8cb3e0.tar.gz
00:01:20.697 [Pipeline] }
00:01:20.715 [Pipeline] // retry
00:01:20.723 [Pipeline] sh
00:01:21.007 + tar --no-same-owner -xf spdk_a46541aa1c4cc20e5e126b9ffa47f495be8cb3e0.tar.gz
00:01:23.551 [Pipeline] sh
00:01:23.834 + git -C spdk log --oneline -n5
00:01:23.835 a46541aa1 nvme/rdma: Allocate memory domain in rdma provider
00:01:23.835 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:01:23.835 1a1586409 nvmf: use bdev's nsid for admin command passthru
00:01:23.835 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns()
00:01:23.835 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid()
00:01:23.853 [Pipeline] writeFile
00:01:23.867 [Pipeline] sh
00:01:24.150 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:24.162 [Pipeline] sh
00:01:24.447 + cat autorun-spdk.conf
00:01:24.447 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.447 SPDK_TEST_NVME=1
00:01:24.447 SPDK_TEST_FTL=1
00:01:24.447 SPDK_TEST_ISAL=1
00:01:24.447 SPDK_RUN_ASAN=1
00:01:24.447 SPDK_RUN_UBSAN=1
00:01:24.447 SPDK_TEST_XNVME=1
00:01:24.447 SPDK_TEST_NVME_FDP=1
00:01:24.447 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.447 RUN_NIGHTLY=0
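The autorun-spdk.conf dump above is the whole contract between the Jenkins job and the test VM: each SPDK_TEST_*/SPDK_RUN_* flag switches a test suite or build mode on or off. Purely as an illustration (the real gating lives in SPDK's autotest scripts, which are not shown in this log, and the test path below is a hypothetical placeholder), a consumer of this file boils down to:

  # Hypothetical sketch of how autorun-spdk.conf flags gate suites;
  # the variable names are real, the surrounding script is illustrative.
  source /home/vagrant/spdk_repo/autorun-spdk.conf
  if [[ ${SPDK_TEST_NVME_FDP:-0} -eq 1 ]]; then
      run_test "nvme_fdp" test/nvme/nvme_fdp.sh   # path is an assumption
  fi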
00:01:24.456 [Pipeline] }
00:01:24.470 [Pipeline] // stage
00:01:24.485 [Pipeline] stage
00:01:24.487 [Pipeline] { (Run VM)
00:01:24.500 [Pipeline] sh
00:01:24.785 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:24.785 + echo 'Start stage prepare_nvme.sh'
00:01:24.785 Start stage prepare_nvme.sh
00:01:24.785 + [[ -n 2 ]]
00:01:24.785 + disk_prefix=ex2
00:01:24.785 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:24.785 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:24.785 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:24.785 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.785 ++ SPDK_TEST_NVME=1
00:01:24.785 ++ SPDK_TEST_FTL=1
00:01:24.785 ++ SPDK_TEST_ISAL=1
00:01:24.785 ++ SPDK_RUN_ASAN=1
00:01:24.785 ++ SPDK_RUN_UBSAN=1
00:01:24.785 ++ SPDK_TEST_XNVME=1
00:01:24.785 ++ SPDK_TEST_NVME_FDP=1
00:01:24.785 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.785 ++ RUN_NIGHTLY=0
00:01:24.785 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:24.785 + nvme_files=()
00:01:24.785 + declare -A nvme_files
00:01:24.785 + backend_dir=/var/lib/libvirt/images/backends
00:01:24.785 + nvme_files['nvme.img']=5G
00:01:24.785 + nvme_files['nvme-cmb.img']=5G
00:01:24.785 + nvme_files['nvme-multi0.img']=4G
00:01:24.785 + nvme_files['nvme-multi1.img']=4G
00:01:24.785 + nvme_files['nvme-multi2.img']=4G
00:01:24.785 + nvme_files['nvme-openstack.img']=8G
00:01:24.785 + nvme_files['nvme-zns.img']=5G
00:01:24.785 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:24.785 + (( SPDK_TEST_FTL == 1 ))
00:01:24.785 + nvme_files["nvme-ftl.img"]=6G
00:01:24.785 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:24.785 + nvme_files["nvme-fdp.img"]=1G
00:01:24.785 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:24.785 + for nvme in "${!nvme_files[@]}"
00:01:24.785 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:24.785 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:24.785 + for nvme in "${!nvme_files[@]}"
00:01:24.785 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:24.785 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:24.785 + for nvme in "${!nvme_files[@]}"
00:01:24.785 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:24.785 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:24.785 + for nvme in "${!nvme_files[@]}"
00:01:24.785 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:25.044 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:25.044 + for nvme in "${!nvme_files[@]}"
00:01:25.044 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:25.044 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.044 + for nvme in "${!nvme_files[@]}"
00:01:25.044 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:25.044 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.044 + for nvme in "${!nvme_files[@]}"
00:01:25.045 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:25.045 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.045 + for nvme in "${!nvme_files[@]}"
00:01:25.045 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:25.045 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:25.045 + for nvme in "${!nvme_files[@]}"
00:01:25.045 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:25.308 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.308 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:25.308 + echo 'End stage prepare_nvme.sh'
00:01:25.308 End stage prepare_nvme.sh
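Each "Formatting …, fmt=raw … preallocation=falloc" line above is characteristic qemu-img output, so create_nvme_img.sh (its body is not shown in this log) most likely wraps a call along these lines; treat the exact wrapper as an assumption:

  # Illustrative equivalent of one loop iteration above (assumed,
  # not taken from create_nvme_img.sh itself):
  sudo qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex2-nvme-multi2.img 4G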
00:01:25.349 [Pipeline] sh
00:01:25.634 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:25.634 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:25.634
00:01:25.634 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:25.634 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:25.634 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:25.634 HELP=0
00:01:25.634 DRY_RUN=0
00:01:25.634 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:25.634 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:25.634 NVME_AUTO_CREATE=0
00:01:25.634 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:25.634 NVME_CMB=,,,,
00:01:25.634 NVME_PMR=,,,,
00:01:25.634 NVME_ZNS=,,,,
00:01:25.634 NVME_MS=true,,,,
00:01:25.634 NVME_FDP=,,,on,
00:01:25.634 SPDK_VAGRANT_DISTRO=fedora39
00:01:25.634 SPDK_VAGRANT_VMCPU=10
00:01:25.634 SPDK_VAGRANT_VMRAM=12288
00:01:25.634 SPDK_VAGRANT_PROVIDER=libvirt
00:01:25.634 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:25.634 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:25.634 SPDK_OPENSTACK_NETWORK=0
00:01:25.634 VAGRANT_PACKAGE_BOX=0
00:01:25.634 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:25.634 FORCE_DISTRO=true
00:01:25.634 VAGRANT_BOX_VERSION=
00:01:25.634 EXTRA_VAGRANTFILES=
00:01:25.634 NIC_MODEL=e1000
00:01:25.634
00:01:25.634 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:25.634 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:28.170 Bringing machine 'default' up with 'libvirt' provider...
00:01:29.107 ==> default: Creating image (snapshot of base box volume).
00:01:29.366 ==> default: Creating domain with the following settings...
00:01:29.366 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730776312_e007836a7ae2155cc784
00:01:29.366 ==> default: -- Domain type: kvm
00:01:29.366 ==> default: -- Cpus: 10
00:01:29.366 ==> default: -- Feature: acpi
00:01:29.366 ==> default: -- Feature: apic
00:01:29.366 ==> default: -- Feature: pae
00:01:29.366 ==> default: -- Memory: 12288M
00:01:29.366 ==> default: -- Memory Backing: hugepages:
00:01:29.366 ==> default: -- Management MAC:
00:01:29.366 ==> default: -- Loader:
00:01:29.366 ==> default: -- Nvram:
00:01:29.366 ==> default: -- Base box: spdk/fedora39
00:01:29.366 ==> default: -- Storage pool: default
00:01:29.366 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730776312_e007836a7ae2155cc784.img (20G)
00:01:29.366 ==> default: -- Volume Cache: default
00:01:29.366 ==> default: -- Kernel:
00:01:29.366 ==> default: -- Initrd:
00:01:29.366 ==> default: -- Graphics Type: vnc
00:01:29.366 ==> default: -- Graphics Port: -1
00:01:29.366 ==> default: -- Graphics IP: 127.0.0.1
00:01:29.366 ==> default: -- Graphics Password: Not defined
00:01:29.366 ==> default: -- Video Type: cirrus
00:01:29.366 ==> default: -- Video VRAM: 9216
00:01:29.366 ==> default: -- Sound Type:
00:01:29.366 ==> default: -- Keymap: en-us
00:01:29.366 ==> default: -- TPM Path:
00:01:29.366 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:29.366 ==> default: -- Command line args:
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:29.366 ==> default: -> value=-drive,
00:01:29.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:29.366 ==> default: -> value=-drive,
00:01:29.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:29.366 ==> default: -> value=-drive,
00:01:29.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.366 ==> default: -> value=-drive,
00:01:29.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.366 ==> default: -> value=-drive,
00:01:29.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:29.366 ==> default: -> value=-device,
00:01:29.366 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.367 ==> default: -> value=-device,
00:01:29.367 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:29.367 ==> default: -> value=-device,
00:01:29.367 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:29.367 ==> default: -> value=-drive,
00:01:29.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:29.367 ==> default: -> value=-device,
00:01:29.367 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
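The argument pairs above expand to four emulated controllers: nvme-0 (serial 12340) with a metadata-capable namespace (ms=64) for FTL, nvme-1 (12341) with a plain 5G namespace, nvme-2 (12342) carrying three namespaces for multi-namespace tests, and nvme-3 (12343) attached to an NVMe subsystem with Flexible Data Placement enabled. Flattened into a direct QEMU invocation, the FDP controller alone would look roughly like this (a sketch; only the -device/-drive values are taken verbatim from the log, the elided arguments are not):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096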
00:01:29.665 ==> default: Creating shared folders metadata...
00:01:29.665 ==> default: Starting domain.
00:01:31.572 ==> default: Waiting for domain to get an IP address...
00:01:49.690 ==> default: Waiting for SSH to become available...
00:01:49.690 ==> default: Configuring and enabling network interfaces...
00:01:53.877 default: SSH address: 192.168.121.250:22
00:01:53.877 default: SSH username: vagrant
00:01:53.877 default: SSH auth method: private key
00:01:56.426 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:06.430 ==> default: Mounting SSHFS shared folder...
00:02:07.368 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:07.368 ==> default: Checking Mount..
00:02:08.767 ==> default: Folder Successfully Mounted!
00:02:08.767 ==> default: Running provisioner: file...
00:02:10.143 default: ~/.gitconfig => .gitconfig
00:02:10.401
00:02:10.401 SUCCESS!
00:02:10.401
00:02:10.401 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:10.401 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:10.401 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:10.401
00:02:10.410 [Pipeline] }
00:02:10.424 [Pipeline] // stage
00:02:10.433 [Pipeline] dir
00:02:10.434 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:10.436 [Pipeline] {
00:02:10.448 [Pipeline] catchError
00:02:10.450 [Pipeline] {
00:02:10.463 [Pipeline] sh
00:02:10.745 + vagrant ssh-config --host vagrant
00:02:10.745 + sed -ne /^Host/,$p
00:02:10.745 + tee ssh_conf
00:02:14.033 Host vagrant
00:02:14.033 HostName 192.168.121.250
00:02:14.033 User vagrant
00:02:14.033 Port 22
00:02:14.033 UserKnownHostsFile /dev/null
00:02:14.033 StrictHostKeyChecking no
00:02:14.033 PasswordAuthentication no
00:02:14.033 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:14.033 IdentitiesOnly yes
00:02:14.033 LogLevel FATAL
00:02:14.033 ForwardAgent yes
00:02:14.033 ForwardX11 yes
00:02:14.033
00:02:14.047 [Pipeline] withEnv
00:02:14.050 [Pipeline] {
00:02:14.063 [Pipeline] sh
00:02:14.345 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:14.345 source /etc/os-release
00:02:14.345 [[ -e /image.version ]] && img=$(< /image.version)
00:02:14.345 # Minimal, systemd-like check.
00:02:14.345 if [[ -e /.dockerenv ]]; then
00:02:14.345 # Clear garbage from the node's name:
00:02:14.345 # agt-er_autotest_547-896 -> autotest_547-896
00:02:14.345 # $HOSTNAME is the actual container id
00:02:14.345 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:14.345 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:14.345 # We can assume this is a mount from a host where container is running,
00:02:14.345 # so fetch its hostname to easily identify the target swarm worker.
00:02:14.345 container="$(< /etc/hostname) ($agent)"
00:02:14.345 else
00:02:14.345 # Fallback
00:02:14.345 container=$agent
00:02:14.345 fi
00:02:14.345 fi
00:02:14.345 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:14.345
00:02:14.616 [Pipeline] }
00:02:14.632 [Pipeline] // withEnv
00:02:14.640 [Pipeline] setCustomBuildProperty
00:02:14.654 [Pipeline] stage
00:02:14.656 [Pipeline] { (Tests)
00:02:14.672 [Pipeline] sh
00:02:14.953 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:15.225 [Pipeline] sh
00:02:15.508 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:15.813 [Pipeline] timeout
00:02:15.814 Timeout set to expire in 50 min
00:02:15.816 [Pipeline] {
00:02:15.831 [Pipeline] sh
00:02:16.113 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:16.680 HEAD is now at a46541aa1 nvme/rdma: Allocate memory domain in rdma provider
00:02:16.692 [Pipeline] sh
00:02:16.970 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:17.241 [Pipeline] sh
00:02:17.520 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:17.796 [Pipeline] sh
00:02:18.100 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:18.368 ++ readlink -f spdk_repo
00:02:18.368 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:18.368 + [[ -n /home/vagrant/spdk_repo ]]
00:02:18.368 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:18.368 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:18.368 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:18.368 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:18.368 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:18.368 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:18.368 + cd /home/vagrant/spdk_repo
00:02:18.368 + source /etc/os-release
00:02:18.368 ++ NAME='Fedora Linux'
00:02:18.368 ++ VERSION='39 (Cloud Edition)'
00:02:18.368 ++ ID=fedora
00:02:18.368 ++ VERSION_ID=39
00:02:18.368 ++ VERSION_CODENAME=
00:02:18.368 ++ PLATFORM_ID=platform:f39
00:02:18.368 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:18.368 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:18.368 ++ LOGO=fedora-logo-icon
00:02:18.368 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:18.368 ++ HOME_URL=https://fedoraproject.org/
00:02:18.368 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:18.368 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:18.368 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:18.368 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:18.368 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:18.368 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:18.368 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:18.368 ++ SUPPORT_END=2024-11-12
00:02:18.368 ++ VARIANT='Cloud Edition'
00:02:18.368 ++ VARIANT_ID=cloud
00:02:18.368 + uname -a
00:02:18.368 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:18.369 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:18.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:19.196 Hugepages
00:02:19.196 node hugesize free / total
00:02:19.196 node0 1048576kB 0 / 0
00:02:19.196 node0 2048kB 0 / 0
00:02:19.196
00:02:19.196 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:19.196 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:19.196 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:19.196 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:19.196 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:19.196 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
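The setup.sh status table closes the loop on the QEMU topology defined earlier: the four 1b36:0010 functions at 00:10.0 through 00:13.0 are the controllers from the domain settings (nvme0 at 00:12.0 is the one carrying the three multi-namespace block devices). Kernel enumeration order need not match PCI order, so tests match controllers by serial number rather than by device name; a quick manual check of that mapping might look like this (illustrative, not part of the log):

  for c in /sys/class/nvme/nvme*; do
      # print kernel name, controller serial, and PCI address
      echo "$(basename "$c"): serial=$(cat "$c/serial") pci=$(basename "$(readlink -f "$c/device")")"
  done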
00:02:19.196 + rm -f /tmp/spdk-ld-path
00:02:19.196 + source autorun-spdk.conf
00:02:19.196 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.196 ++ SPDK_TEST_NVME=1
00:02:19.196 ++ SPDK_TEST_FTL=1
00:02:19.196 ++ SPDK_TEST_ISAL=1
00:02:19.196 ++ SPDK_RUN_ASAN=1
00:02:19.196 ++ SPDK_RUN_UBSAN=1
00:02:19.196 ++ SPDK_TEST_XNVME=1
00:02:19.196 ++ SPDK_TEST_NVME_FDP=1
00:02:19.196 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.196 ++ RUN_NIGHTLY=0
00:02:19.196 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:19.196 + [[ -n '' ]]
00:02:19.196 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:19.456 + for M in /var/spdk/build-*-manifest.txt
00:02:19.456 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:19.456 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.456 + for M in /var/spdk/build-*-manifest.txt
00:02:19.456 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:19.456 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.456 + for M in /var/spdk/build-*-manifest.txt
00:02:19.456 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:19.456 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.456 ++ uname
00:02:19.456 + [[ Linux == \L\i\n\u\x ]]
00:02:19.456 + sudo dmesg -T
00:02:19.456 + sudo dmesg --clear
00:02:19.456 + dmesg_pid=5259
00:02:19.456 + [[ Fedora Linux == FreeBSD ]]
00:02:19.456 + sudo dmesg -Tw
00:02:19.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.456 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:19.456 + [[ -x /usr/src/fio-static/fio ]]
00:02:19.456 + export FIO_BIN=/usr/src/fio-static/fio
00:02:19.456 + FIO_BIN=/usr/src/fio-static/fio
00:02:19.456 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:19.456 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:19.456 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:19.456 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.456 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.456 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:19.456 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.456 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.456 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:19.456 03:12:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:19.456 03:12:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.456 03:12:43 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:19.456 03:12:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:19.456 03:12:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:19.715 03:12:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:19.715 03:12:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:19.715 03:12:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:19.715 03:12:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:19.715 03:12:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:19.715 03:12:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:19.715 03:12:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.715 03:12:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.715 03:12:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.715 03:12:43 -- paths/export.sh@5 -- $ export PATH
00:02:19.715 03:12:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.715 03:12:43 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:19.715 03:12:43 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:19.715 03:12:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730776363.XXXXXX
00:02:19.715 03:12:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730776363.q7VdMh
00:02:19.715 03:12:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:19.715 03:12:43 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:19.715 03:12:43 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:19.715 03:12:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:19.715 03:12:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:19.715 03:12:43 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:19.715 03:12:43 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:19.715 03:12:43 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.716 03:12:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:19.716 03:12:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:19.716 03:12:43 -- pm/common@17 -- $ local monitor
00:02:19.716 03:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:19.716 03:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:19.716 03:12:43 -- pm/common@21 -- $ date +%s
00:02:19.716 03:12:43 -- pm/common@25 -- $ sleep 1
00:02:19.716 03:12:43 -- pm/common@21 -- $ date +%s
00:02:19.716 03:12:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730776363
00:02:19.716 03:12:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730776363
00:02:19.716 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730776363_collect-cpu-load.pm.log
00:02:19.716 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730776363_collect-vmstat.pm.log
00:02:20.652 03:12:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:20.652 03:12:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:20.652 03:12:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:20.653 03:12:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:20.653 03:12:44 -- spdk/autobuild.sh@16 -- $ date -u
00:02:20.653 Tue Nov 5 03:12:44 AM UTC 2024
00:02:20.653 03:12:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:20.653 v25.01-pre-159-ga46541aa1
00:02:20.653 03:12:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:20.653 03:12:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:20.653 03:12:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:20.653 03:12:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:20.653 03:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.653 ************************************
00:02:20.653 START TEST asan
00:02:20.653 ************************************
00:02:20.653 using asan
00:02:20.653 03:12:44 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:20.653
00:02:20.653 real 0m0.000s
00:02:20.653 user 0m0.000s
00:02:20.653 sys 0m0.000s
00:02:20.653 03:12:44 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:20.653 03:12:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:20.653 ************************************
00:02:20.653 END TEST asan
00:02:20.653 ************************************
00:02:20.912 03:12:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:20.912 03:12:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:20.912 03:12:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:20.912 03:12:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:20.912 03:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.912 ************************************
00:02:20.912 START TEST ubsan
00:02:20.912 ************************************
00:02:20.912 using ubsan
00:02:20.912 03:12:44 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:20.912
00:02:20.912 real 0m0.000s
00:02:20.912 user 0m0.000s
00:02:20.912 sys 0m0.000s
00:02:20.912 03:12:44 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:20.912 03:12:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:20.912 ************************************
00:02:20.912 END TEST ubsan
00:02:20.912 ************************************
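run_test is the bracketing helper used for every suite in this log: it prints the START/END banners, times the wrapped command (the real/user/sys lines above), and fails the stage if the command fails. Functionally it behaves roughly like the sketch below; this is a simplified model, the actual implementation lives in SPDK's test/common/autotest_common.sh and also manages xtrace state:

  # Rough, simplified model of run_test's bracketing behavior.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@" || return 1          # propagate the wrapped command's failure
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }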
00:02:20.912 03:12:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:20.912 03:12:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:20.912 03:12:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:20.912 03:12:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:20.912 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:20.912 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:21.481 Using 'verbs' RDMA provider
00:02:37.761 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:52.644 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:53.213 Creating mk/config.mk...done.
00:02:53.213 Creating mk/cc.flags.mk...done.
00:02:53.213 Type 'make' to build.
00:02:53.213 03:13:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:53.213 03:13:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:53.213 03:13:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:53.213 03:13:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:53.213 ************************************
00:02:53.213 START TEST make
00:02:53.213 ************************************
00:02:53.213 03:13:16 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:53.780 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:53.780 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:53.780 meson setup builddir \
00:02:53.780 -Dwith-libaio=enabled \
00:02:53.780 -Dwith-liburing=enabled \
00:02:53.780 -Dwith-libvfn=disabled \
00:02:53.780 -Dwith-spdk=disabled \
00:02:53.780 -Dexamples=false \
00:02:53.780 -Dtests=false \
00:02:53.780 -Dtools=false && \
00:02:53.780 meson compile -C builddir && \
00:02:53.780 cd -)
00:02:53.780 make[1]: Nothing to be done for 'all'.
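With SPDK_TEST_XNVME=1, the top-level make first builds the bundled xnvme as a Meson project, using exactly the invocation echoed above: the libaio and io_uring backends on, the libvfn and SPDK backends off, and examples/tests/tools skipped. To rebuild it by hand inside this VM, the same block can be run directly (paths as in this log):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  meson setup builddir \
      -Dwith-libaio=enabled -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir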
00:02:55.683 The Meson build system
00:02:55.683 Version: 1.5.0
00:02:55.683 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:55.683 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:55.683 Build type: native build
00:02:55.683 Project name: xnvme
00:02:55.683 Project version: 0.7.5
00:02:55.683 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:55.683 C linker for the host machine: cc ld.bfd 2.40-14
00:02:55.683 Host machine cpu family: x86_64
00:02:55.683 Host machine cpu: x86_64
00:02:55.683 Message: host_machine.system: linux
00:02:55.683 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:55.683 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:55.683 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:55.683 Run-time dependency threads found: YES
00:02:55.683 Has header "setupapi.h" : NO
00:02:55.683 Has header "linux/blkzoned.h" : YES
00:02:55.683 Has header "linux/blkzoned.h" : YES (cached)
00:02:55.683 Has header "libaio.h" : YES
00:02:55.683 Library aio found: YES
00:02:55.683 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:55.683 Run-time dependency liburing found: YES 2.2
00:02:55.683 Dependency libvfn skipped: feature with-libvfn disabled
00:02:55.683 Found CMake: /usr/bin/cmake (3.27.7)
00:02:55.683 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:55.683 Subproject spdk : skipped: feature with-spdk disabled
00:02:55.683 Run-time dependency appleframeworks found: NO (tried framework)
00:02:55.683 Run-time dependency appleframeworks found: NO (tried framework)
00:02:55.683 Library rt found: YES
00:02:55.683 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:55.683 Configuring xnvme_config.h using configuration
00:02:55.683 Configuring xnvme.spec using configuration
00:02:55.683 Run-time dependency bash-completion found: YES 2.11
00:02:55.683 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:55.683 Program cp found: YES (/usr/bin/cp)
00:02:55.683 Build targets in project: 3
00:02:55.683
00:02:55.683 xnvme 0.7.5
00:02:55.683
00:02:55.683 Subprojects
00:02:55.683 spdk : NO Feature 'with-spdk' disabled
00:02:55.683
00:02:55.683 User defined options
00:02:55.683 examples : false
00:02:55.683 tests : false
00:02:55.683 tools : false
00:02:55.683 with-libaio : enabled
00:02:55.683 with-liburing: enabled
00:02:55.683 with-libvfn : disabled
00:02:55.683 with-spdk : disabled
00:02:55.683
00:02:55.683 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:55.941 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:56.200 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:56.200 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:56.200 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:56.200 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:56.200 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:56.200 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:56.200 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:56.200 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:56.200 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:56.200 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:56.200 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:56.200 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:56.200 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:56.200 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:56.200 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:56.200 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:56.200 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:56.458 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:56.458 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:56.458 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:56.458 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:56.458 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:56.458 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:56.458 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:56.458 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:56.458 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:56.458 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:56.458 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:56.459 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:56.459 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:56.459 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:56.459 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:56.459 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:56.459 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:56.459 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:56.459 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:56.459 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:56.459 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:56.459 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:56.459 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:56.459 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:56.459 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:56.459 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:56.459 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:56.459 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:56.459 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:56.459 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:56.459 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:56.459 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:56.459 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:56.459 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:56.718 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:56.718 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:56.718 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:56.718 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:56.718 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:56.718 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:56.718 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:56.718 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:56.718 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:56.718 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:56.718 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:56.718 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:56.718 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:56.718 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:56.718 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:56.718 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:56.718 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:56.718 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:56.976 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:56.976 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:56.976 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:56.977 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:57.248 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:57.248 [75/76] Linking static target lib/libxnvme.a
00:02:57.248 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:57.248 INFO: autodetecting backend as ninja
00:02:57.248 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:57.248 /home/vagrant/spdk_repo/spdk/xnvmebuild
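After xnvme, the build recurses into the bundled DPDK (spdk/dpdk), again configured with Meson; the summary that follows shows a deliberately trimmed configuration: no apps, a 22-library core set, and only the pci/vdev buses plus the ring mempool driver. The exact option list comes from SPDK's dpdkbuild makefile, which is not shown in this log, so the following is a hypothetical reproduction with values inferred from the "Libraries Enabled" / "Drivers Enabled" summary printed below:

  # Assumed, comparable standalone configuration; not the literal
  # command SPDK runs.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      -Denable_libs=log,kvargs,telemetry,eal,ring,rcu,mempool,mbuf,net,meter,ethdev,pci,cmdline,hash,timer,compressdev,cryptodev,dmadev,power,reorder,security,vhost \
      -Denable_drivers=bus/pci,bus/vdev,mempool/ring \
      -Dtests=false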
00:03:05.363 The Meson build system
00:03:05.363 Version: 1.5.0
00:03:05.363 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:05.363 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:05.363 Build type: native build
00:03:05.364 Program cat found: YES (/usr/bin/cat)
00:03:05.364 Project name: DPDK
00:03:05.364 Project version: 24.03.0
00:03:05.364 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:05.364 C linker for the host machine: cc ld.bfd 2.40-14
00:03:05.364 Host machine cpu family: x86_64
00:03:05.364 Host machine cpu: x86_64
00:03:05.364 Message: ## Building in Developer Mode ##
00:03:05.364 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:05.364 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:05.364 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:05.364 Program python3 found: YES (/usr/bin/python3)
00:03:05.364 Program cat found: YES (/usr/bin/cat)
00:03:05.364 Compiler for C supports arguments -march=native: YES
00:03:05.364 Checking for size of "void *" : 8
00:03:05.364 Checking for size of "void *" : 8 (cached)
00:03:05.364 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:05.364 Library m found: YES
00:03:05.364 Library numa found: YES
00:03:05.364 Has header "numaif.h" : YES
00:03:05.364 Library fdt found: NO
00:03:05.364 Library execinfo found: NO
00:03:05.364 Has header "execinfo.h" : YES
00:03:05.364 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:05.364 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:05.364 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:05.364 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:05.364 Run-time dependency openssl found: YES 3.1.1
00:03:05.364 Run-time dependency libpcap found: YES 1.10.4
00:03:05.364 Has header "pcap.h" with dependency libpcap: YES
00:03:05.364 Compiler for C supports arguments -Wcast-qual: YES
00:03:05.364 Compiler for C supports arguments -Wdeprecated: YES
00:03:05.364 Compiler for C supports arguments -Wformat: YES
00:03:05.364 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:05.364 Compiler for C supports arguments -Wformat-security: NO
00:03:05.364 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:05.364 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:05.364 Compiler for C supports arguments -Wnested-externs: YES
00:03:05.364 Compiler for C supports arguments -Wold-style-definition: YES
00:03:05.364 Compiler for C supports arguments -Wpointer-arith: YES
00:03:05.364 Compiler for C supports arguments -Wsign-compare: YES
00:03:05.364 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:05.364 Compiler for C supports arguments -Wundef: YES
00:03:05.364 Compiler for C supports arguments -Wwrite-strings: YES
00:03:05.364 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:05.364 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:05.364 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:05.364 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:05.364 Program objdump found: YES (/usr/bin/objdump)
00:03:05.364 Compiler for C supports arguments -mavx512f: YES
00:03:05.364 Checking if "AVX512 checking" compiles: YES
00:03:05.364 Fetching value of define "__SSE4_2__" : 1
00:03:05.364 Fetching value of define "__AES__" : 1
00:03:05.364 Fetching value of define "__AVX__" : 1
00:03:05.364 Fetching value of define "__AVX2__" : 1
00:03:05.364 Fetching value of define "__AVX512BW__" : 1
00:03:05.364 Fetching value of define "__AVX512CD__" : 1
00:03:05.364 Fetching value of define "__AVX512DQ__" : 1
00:03:05.364 Fetching value of define "__AVX512F__" : 1
00:03:05.364 Fetching value of define "__AVX512VL__" : 1
00:03:05.364 Fetching value of define "__PCLMUL__" : 1
00:03:05.364 Fetching value of define "__RDRND__" : 1
00:03:05.364 Fetching value of define "__RDSEED__" : 1
00:03:05.364 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:05.364 Fetching value of define "__znver1__" : (undefined)
00:03:05.364 Fetching value of define "__znver2__" : (undefined)
00:03:05.364 Fetching value of define "__znver3__" : (undefined)
00:03:05.364 Fetching value of define "__znver4__" : (undefined)
00:03:05.364 Library asan found: YES
00:03:05.364 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:05.364 Message: lib/log: Defining dependency "log"
00:03:05.364 Message: lib/kvargs: Defining dependency "kvargs"
00:03:05.364 Message: lib/telemetry: Defining dependency "telemetry"
00:03:05.364 Library rt found: YES
00:03:05.364 Checking for function "getentropy" : NO
00:03:05.364 Message: lib/eal: Defining dependency "eal"
00:03:05.364 Message: lib/ring: Defining dependency "ring"
00:03:05.364 Message: lib/rcu: Defining dependency "rcu"
00:03:05.364 Message: lib/mempool: Defining dependency "mempool"
00:03:05.364 Message: lib/mbuf: Defining dependency "mbuf"
00:03:05.364 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:05.364 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:05.364 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:05.364 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:05.364 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:05.364 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:05.364 Compiler for C supports arguments -mpclmul: YES
00:03:05.364 Compiler for C supports arguments -maes: YES
00:03:05.364 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:05.364 Compiler for C supports arguments -mavx512bw: YES
00:03:05.364 Compiler for C supports arguments -mavx512dq: YES
00:03:05.364 Compiler for C supports arguments -mavx512vl: YES
00:03:05.364 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:05.364 Compiler for C supports arguments -mavx2: YES
00:03:05.364 Compiler for C supports arguments -mavx: YES
00:03:05.364 Message: lib/net: Defining dependency "net"
00:03:05.364 Message: lib/meter: Defining dependency "meter"
00:03:05.364 Message: lib/ethdev: Defining dependency "ethdev"
00:03:05.364 Message: lib/pci: Defining dependency "pci"
00:03:05.364 Message: lib/cmdline: Defining dependency "cmdline"
00:03:05.364 Message: lib/hash: Defining dependency "hash"
00:03:05.364 Message: lib/timer: Defining dependency "timer"
00:03:05.364 Message: lib/compressdev: Defining dependency "compressdev"
00:03:05.364 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:05.364 Message: lib/dmadev: Defining dependency "dmadev"
00:03:05.364 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:05.364 Message: lib/power: Defining dependency "power"
00:03:05.364 Message: lib/reorder: Defining dependency "reorder"
00:03:05.364 Message: lib/security: Defining dependency "security"
00:03:05.364 Has header "linux/userfaultfd.h" : YES
00:03:05.364 Has header "linux/vduse.h" : YES
00:03:05.364 Message: lib/vhost: Defining dependency "vhost"
00:03:05.364 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:05.364 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:05.364 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:05.364 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:05.364 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:05.364 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:05.364 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:05.364 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:05.364 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:05.364 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:05.364 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:05.364 Configuring doxy-api-html.conf using configuration
00:03:05.364 Configuring doxy-api-man.conf using configuration
00:03:05.364 Program mandb found: YES (/usr/bin/mandb)
00:03:05.364 Program sphinx-build found: NO
00:03:05.364 Configuring rte_build_config.h using configuration
00:03:05.364 Message:
00:03:05.364 =================
00:03:05.364 Applications Enabled
00:03:05.364 =================
00:03:05.364
00:03:05.364 apps:
00:03:05.364
00:03:05.364
00:03:05.364 Message:
00:03:05.364 =================
00:03:05.364 Libraries Enabled
00:03:05.364 =================
00:03:05.364
00:03:05.364 libs:
00:03:05.364 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:05.364 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:05.364 cryptodev, dmadev, power, reorder, security, vhost,
00:03:05.364
00:03:05.364 Message:
00:03:05.364 ===============
00:03:05.364 Drivers Enabled
00:03:05.364 ===============
00:03:05.364
00:03:05.364 common:
00:03:05.364
00:03:05.364 bus:
00:03:05.364 pci, vdev,
00:03:05.364 mempool:
00:03:05.364 ring,
00:03:05.364 dma:
00:03:05.364
00:03:05.364 net:
00:03:05.364
00:03:05.364 crypto:
00:03:05.364
00:03:05.364 compress:
00:03:05.364
00:03:05.364 vdpa:
00:03:05.364
00:03:05.364
00:03:05.364 Message:
00:03:05.364 =================
00:03:05.364 Content Skipped
00:03:05.364 =================
00:03:05.364
00:03:05.364 apps:
00:03:05.364 dumpcap: explicitly disabled via build config
00:03:05.364 graph: explicitly disabled via build config
00:03:05.364 pdump: explicitly disabled via build config
00:03:05.364 proc-info: explicitly disabled via build config
00:03:05.364 test-acl: explicitly disabled via build config
00:03:05.364 test-bbdev: explicitly disabled via build config
00:03:05.364 test-cmdline: explicitly disabled via build config
00:03:05.364 test-compress-perf: explicitly disabled via build config
00:03:05.364 test-crypto-perf: explicitly disabled via build config
00:03:05.364 test-dma-perf: explicitly disabled via build config
00:03:05.364 test-eventdev: explicitly disabled via build config
00:03:05.364 test-fib: explicitly disabled via build config
00:03:05.364 test-flow-perf: explicitly disabled via build config
00:03:05.364 test-gpudev: explicitly disabled via build config
00:03:05.364 test-mldev: explicitly disabled via build config
00:03:05.364 test-pipeline: explicitly disabled via build config
00:03:05.364 test-pmd: explicitly disabled via build config
00:03:05.364 test-regex: explicitly disabled via build config
00:03:05.364 test-sad: explicitly disabled via build config
00:03:05.364 test-security-perf: explicitly disabled via build config
00:03:05.364
00:03:05.364 libs:
00:03:05.364 argparse: explicitly disabled via build config
00:03:05.364 metrics: explicitly disabled via build config
00:03:05.364 acl: explicitly disabled via build config
00:03:05.364 bbdev: explicitly disabled via build config
00:03:05.364 bitratestats: explicitly disabled via build config
00:03:05.364 bpf: explicitly disabled via build config
00:03:05.364 cfgfile: explicitly disabled via build config
00:03:05.364 distributor: explicitly disabled via build config
00:03:05.364 efd: explicitly disabled via build config
00:03:05.364 eventdev: explicitly disabled via build config
00:03:05.364 dispatcher: explicitly disabled via build config
00:03:05.364 gpudev: explicitly disabled via build config
00:03:05.364 gro: explicitly disabled via build config
00:03:05.364 gso: explicitly disabled via build config
00:03:05.364 ip_frag: explicitly disabled via build config
00:03:05.364 jobstats: explicitly disabled via build config
00:03:05.364 latencystats: explicitly disabled via build config
00:03:05.364 lpm: explicitly disabled via build config
00:03:05.364 member: explicitly disabled via build config
00:03:05.364 pcapng: explicitly disabled via build config
00:03:05.364 rawdev: explicitly disabled via build config
00:03:05.364 regexdev: explicitly disabled via build config
00:03:05.365 mldev: explicitly disabled via build config
00:03:05.365 rib: explicitly disabled via build config
00:03:05.365 sched: explicitly disabled via build config
00:03:05.365 stack: explicitly disabled via build config
00:03:05.365 ipsec: explicitly disabled via build config
00:03:05.365 pdcp: explicitly disabled via build config
00:03:05.365 fib: explicitly disabled via build config
00:03:05.365 port: explicitly disabled via build config
00:03:05.365 pdump: explicitly disabled via build config
00:03:05.365 table: explicitly disabled via build config
00:03:05.365 pipeline: explicitly disabled via build config
00:03:05.365 graph: explicitly disabled via build config
00:03:05.365 node: explicitly disabled via build config
00:03:05.365
00:03:05.365 drivers:
00:03:05.365 common/cpt: not in enabled drivers build config
00:03:05.365 common/dpaax: not in enabled drivers build config
00:03:05.365 common/iavf: not in enabled drivers build config
00:03:05.365 common/idpf: not in enabled drivers build config
00:03:05.365 common/ionic: not in enabled drivers build config
00:03:05.365 common/mvep: not in enabled drivers build config
00:03:05.365 common/octeontx: not in enabled drivers build config
00:03:05.365 bus/auxiliary: not in enabled drivers build config
00:03:05.365 bus/cdx: not in enabled drivers build config
00:03:05.365 bus/dpaa: not in enabled drivers build config
00:03:05.365 bus/fslmc: not in enabled drivers build config
00:03:05.365 bus/ifpga: not in enabled drivers build config
00:03:05.365 bus/platform: not in enabled drivers build config
00:03:05.365 bus/uacce: not in enabled drivers build config
00:03:05.365 bus/vmbus: not in enabled drivers build config
00:03:05.365 common/cnxk: not in enabled drivers build config
00:03:05.365 common/mlx5: not in enabled drivers build config
00:03:05.365 common/nfp: not in enabled drivers build config
00:03:05.365 common/nitrox: not in enabled drivers build config
00:03:05.365 common/qat: not in enabled drivers build config
00:03:05.365 common/sfc_efx: not in enabled drivers build config
00:03:05.365 mempool/bucket: not in enabled drivers build config
00:03:05.365 mempool/cnxk: not in enabled drivers build config
00:03:05.365 mempool/dpaa: not in enabled drivers build config
00:03:05.365 mempool/dpaa2: not in enabled drivers build config
00:03:05.365 mempool/octeontx: not in enabled drivers build config
00:03:05.365 mempool/stack: not in enabled drivers build config
00:03:05.365 dma/cnxk: not in enabled drivers build config
00:03:05.365 dma/dpaa: not in enabled drivers build config
00:03:05.365 dma/dpaa2: not in enabled drivers build config
00:03:05.365 dma/hisilicon: not in enabled drivers build config
00:03:05.365 dma/idxd: not in enabled drivers build config
00:03:05.365 dma/ioat: not in enabled drivers build config
00:03:05.365 dma/skeleton: not in enabled drivers build config
00:03:05.365 net/af_packet: not in enabled drivers build config
00:03:05.365 net/af_xdp: not in enabled drivers build config
00:03:05.365 net/ark: not in enabled drivers build config
00:03:05.365 net/atlantic: not in enabled drivers build config
00:03:05.365 net/avp: not in enabled drivers build config
00:03:05.365 net/axgbe: not in enabled drivers build config
00:03:05.365 net/bnx2x: not in enabled drivers build config
00:03:05.365 net/bnxt: not in enabled drivers build config
00:03:05.365 net/bonding: not in enabled drivers build config
00:03:05.365 net/cnxk: not in enabled drivers build config
00:03:05.365 net/cpfl: not in enabled drivers build config
00:03:05.365 net/cxgbe: not in enabled drivers build config
00:03:05.365 net/dpaa: not in enabled drivers build config
00:03:05.365 net/dpaa2: not in enabled drivers build config
00:03:05.365 net/e1000: not in enabled drivers build config
00:03:05.365 net/ena: not in enabled drivers build config
00:03:05.365 net/enetc: not in enabled drivers build config
00:03:05.365 net/enetfec: not in enabled drivers build config
00:03:05.365 net/enic: not in enabled drivers build config
00:03:05.365 net/failsafe: not in enabled drivers build config
00:03:05.365 net/fm10k: not in enabled drivers build config
00:03:05.365 net/gve: not in enabled drivers build config
00:03:05.365 net/hinic: not in enabled drivers build config
00:03:05.365 net/hns3: not in enabled drivers build config
00:03:05.365 net/i40e: not in enabled drivers build config
00:03:05.365 net/iavf: not in enabled drivers build config
00:03:05.365 net/ice: not in enabled drivers build config
00:03:05.365 net/idpf: not in enabled drivers build config
00:03:05.365 net/igc: not in enabled drivers build config
00:03:05.365 net/ionic: not in enabled drivers build config
00:03:05.365 net/ipn3ke: not in enabled drivers build config
00:03:05.365 net/ixgbe: not in enabled drivers build config
00:03:05.365 net/mana: not in enabled drivers build config
00:03:05.365 net/memif: not in enabled drivers build config
00:03:05.365 net/mlx4: not in enabled drivers build config
00:03:05.365 net/mlx5: not in enabled drivers build config
00:03:05.365 net/mvneta: not in enabled drivers build config
00:03:05.365 net/mvpp2: not in enabled drivers build config
00:03:05.365 net/netvsc: not in enabled drivers build config
00:03:05.365 net/nfb: not in enabled drivers build config
00:03:05.365 net/nfp: not in enabled drivers build config
00:03:05.365 net/ngbe: not in enabled drivers build config
00:03:05.365 net/null: not in enabled drivers build config
00:03:05.365 net/octeontx: not in enabled drivers build config
00:03:05.365 net/octeon_ep: not in enabled drivers build config
00:03:05.365 net/pcap: not in enabled drivers build config
00:03:05.365 net/pfe: not in enabled drivers build config
00:03:05.365 net/qede: not in enabled drivers build config
00:03:05.365 net/ring: not in enabled drivers build config
00:03:05.365 net/sfc: not in enabled drivers build config
00:03:05.365 net/softnic: not in enabled drivers build config
00:03:05.365 net/tap: not in enabled drivers build config
00:03:05.365 net/thunderx: not in enabled drivers build config
00:03:05.365 net/txgbe: not in enabled drivers build config
00:03:05.365 net/vdev_netvsc: not in enabled drivers build config
00:03:05.365 net/vhost: not in enabled drivers build config
00:03:05.365 net/virtio: not in enabled drivers build config
00:03:05.365 net/vmxnet3: not in enabled drivers build config
00:03:05.365 raw/*: missing internal dependency, "rawdev"
00:03:05.365 crypto/armv8: not in enabled drivers build config
00:03:05.365 crypto/bcmfs: not in enabled drivers build config
00:03:05.365 crypto/caam_jr: not in enabled drivers build config
00:03:05.365 crypto/ccp: not in enabled drivers build config
00:03:05.365 crypto/cnxk: not in enabled drivers build config
00:03:05.365 crypto/dpaa_sec: not in enabled drivers build config
00:03:05.365 crypto/dpaa2_sec: not in enabled drivers build config
00:03:05.365 crypto/ipsec_mb: not in enabled drivers build config
00:03:05.365 crypto/mlx5: not in enabled drivers build config
00:03:05.365 crypto/mvsam: not in enabled
drivers build config 00:03:05.365 crypto/nitrox: not in enabled drivers build config 00:03:05.365 crypto/null: not in enabled drivers build config 00:03:05.365 crypto/octeontx: not in enabled drivers build config 00:03:05.365 crypto/openssl: not in enabled drivers build config 00:03:05.365 crypto/scheduler: not in enabled drivers build config 00:03:05.365 crypto/uadk: not in enabled drivers build config 00:03:05.365 crypto/virtio: not in enabled drivers build config 00:03:05.365 compress/isal: not in enabled drivers build config 00:03:05.365 compress/mlx5: not in enabled drivers build config 00:03:05.365 compress/nitrox: not in enabled drivers build config 00:03:05.365 compress/octeontx: not in enabled drivers build config 00:03:05.365 compress/zlib: not in enabled drivers build config 00:03:05.365 regex/*: missing internal dependency, "regexdev" 00:03:05.365 ml/*: missing internal dependency, "mldev" 00:03:05.365 vdpa/ifc: not in enabled drivers build config 00:03:05.365 vdpa/mlx5: not in enabled drivers build config 00:03:05.365 vdpa/nfp: not in enabled drivers build config 00:03:05.365 vdpa/sfc: not in enabled drivers build config 00:03:05.365 event/*: missing internal dependency, "eventdev" 00:03:05.365 baseband/*: missing internal dependency, "bbdev" 00:03:05.365 gpu/*: missing internal dependency, "gpudev" 00:03:05.365 00:03:05.365 00:03:05.365 Build targets in project: 85 00:03:05.365 00:03:05.365 DPDK 24.03.0 00:03:05.365 00:03:05.365 User defined options 00:03:05.365 buildtype : debug 00:03:05.365 default_library : shared 00:03:05.365 libdir : lib 00:03:05.365 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:05.365 b_sanitize : address 00:03:05.365 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:05.365 c_link_args : 00:03:05.365 cpu_instruction_set: native 00:03:05.365 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:05.365 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:05.365 enable_docs : false 00:03:05.365 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:05.365 enable_kmods : false 00:03:05.365 max_lcores : 128 00:03:05.365 tests : false 00:03:05.365 00:03:05.365 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.365 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:05.365 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:05.365 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:05.365 [3/268] Linking static target lib/librte_kvargs.a 00:03:05.365 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:05.365 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:05.365 [6/268] Linking static target lib/librte_log.a 00:03:05.933 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:05.933 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:05.933 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:05.933 [10/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:05.933 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:05.933 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:05.933 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:05.933 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.933 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:06.191 [16/268] Linking static target lib/librte_telemetry.a 00:03:06.191 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.191 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.449 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.449 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.449 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.449 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.449 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.449 [24/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.449 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.449 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.707 [27/268] Linking target lib/librte_log.so.24.1 00:03:06.707 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:06.707 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:06.707 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:06.966 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:06.966 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:06.966 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:06.966 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.966 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.225 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:07.225 [37/268] Linking target lib/librte_telemetry.so.24.1 00:03:07.225 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.225 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.225 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.225 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.225 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.225 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.225 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.225 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:07.483 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:07.483 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:07.483 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:07.743 
[49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.743 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:07.743 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.743 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.001 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.001 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.001 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.001 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.001 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.001 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.260 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.260 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.260 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.260 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.260 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.518 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.518 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.518 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.518 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.778 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.778 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.778 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:09.037 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:09.037 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:09.037 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:09.037 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.037 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:09.037 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:09.037 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:09.037 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.037 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.295 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.295 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.295 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.295 [83/268] Linking static target lib/librte_ring.a 00:03:09.554 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.554 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.554 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.554 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.554 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.554 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 
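
The "User defined options" block printed above maps one-to-one onto flags passed to meson. The exact configure invocation is issued by SPDK's dpdkbuild wrapper and is not echoed in this log, so the following is a hedged reconstruction assembled from the listed options (run from the dpdk submodule; c_link_args was empty and is omitted):

  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
      -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
      -Denable_docs=false -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10

Meson reported 85 build targets, which ninja expands into the 268 numbered steps running through this stretch of the log.
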
00:03:09.554 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.812 [91/268] Linking static target lib/librte_rcu.a 00:03:09.812 [92/268] Linking static target lib/librte_eal.a 00:03:09.812 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.812 [94/268] Linking static target lib/librte_mempool.a 00:03:10.078 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.078 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:10.078 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.078 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.337 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.337 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.337 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.337 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.337 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.337 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.337 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.595 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.595 [107/268] Linking static target lib/librte_net.a 00:03:10.595 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.595 [109/268] Linking static target lib/librte_meter.a 00:03:10.854 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.854 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.854 [112/268] Linking static target lib/librte_mbuf.a 00:03:10.854 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.854 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.854 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.113 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.113 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:11.113 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.371 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.630 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:11.630 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:11.630 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:11.889 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:11.889 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:11.889 [125/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.889 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:12.147 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:12.147 [128/268] Linking static target lib/librte_pci.a 00:03:12.147 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:12.147 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:12.147 [131/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:12.147 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:12.147 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:12.147 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:12.406 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:12.406 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:12.406 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:12.406 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:12.406 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:12.406 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:12.406 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:12.406 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.406 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:12.406 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:12.666 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:12.666 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:12.666 [147/268] Linking static target lib/librte_cmdline.a 00:03:12.924 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:12.924 [149/268] Linking static target lib/librte_timer.a 00:03:12.924 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:12.924 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:12.924 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:13.183 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:13.183 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.183 [155/268] Linking static target lib/librte_ethdev.a 00:03:13.183 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:13.442 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:13.442 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:13.442 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.700 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:13.700 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:13.700 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.700 [163/268] Linking static target lib/librte_compressdev.a 00:03:13.700 [164/268] Linking static target lib/librte_hash.a 00:03:13.700 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:13.700 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:13.959 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.959 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.959 [169/268] Linking static target lib/librte_dmadev.a 00:03:14.218 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:14.218 [171/268] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.218 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:14.476 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:14.477 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.735 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.735 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:14.735 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:14.735 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.735 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:14.994 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.994 [181/268] Linking static target lib/librte_cryptodev.a 00:03:14.994 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.994 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.994 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.252 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.252 [186/268] Linking static target lib/librte_power.a 00:03:15.252 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.252 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.511 [189/268] Linking static target lib/librte_reorder.a 00:03:15.511 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.769 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.769 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.769 [193/268] Linking static target lib/librte_security.a 00:03:16.027 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.027 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.290 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.546 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:16.547 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.547 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.547 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.547 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:16.805 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:17.063 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:17.063 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:17.063 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:17.321 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:17.321 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:17.321 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:17.321 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:17.321 [210/268] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:03:17.579 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.579 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:17.579 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:17.579 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.579 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.579 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:17.579 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.579 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.579 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:17.838 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.838 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.838 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:17.838 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.838 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.838 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:18.097 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.356 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.924 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.147 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.147 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:23.147 [231/268] Linking target lib/librte_eal.so.24.1 00:03:23.147 [232/268] Linking static target lib/librte_vhost.a 00:03:23.147 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.147 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.147 [235/268] Linking target lib/librte_meter.so.24.1 00:03:23.147 [236/268] Linking target lib/librte_ring.so.24.1 00:03:23.147 [237/268] Linking target lib/librte_pci.so.24.1 00:03:23.147 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.147 [239/268] Linking target lib/librte_timer.so.24.1 00:03:23.147 [240/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.147 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.147 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.147 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.147 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.147 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.147 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:23.147 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:23.147 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.147 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 
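
Two generated-step patterns recur above. For each enabled driver, meson emits a small rte_bus_*.pmd.c stub ("Generating drivers/rte_bus_vdev.pmd.c with a custom command") embedding the driver's PMD information string, the metadata that usertools such as dpdk-pmdinfo.py later read back out of the binary. And every library gets a *.sym_chk step that verifies the symbols the built object actually exports against its version.map. DPDK's real check lives under buildtools/ and is more thorough, but conceptually it amounts to something like:

  # symbols the freshly linked shared object exports
  nm -D --defined-only build-tmp/lib/librte_eal.so.24.1 | awk '{print $NF}' | sort > exported.txt
  # symbols the linker version script promises (strip map-file keywords and punctuation)
  sed -n 's/^[[:space:]]*\([a-z_][a-z_0-9]*\);$/\1/p' lib/eal/version.map | sort > declared.txt
  diff exported.txt declared.txt
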
00:03:23.147 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.407 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.407 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.407 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.407 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.407 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.407 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:23.407 [257/268] Linking target lib/librte_net.so.24.1 00:03:23.666 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.666 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.666 [260/268] Linking target lib/librte_hash.so.24.1 00:03:23.666 [261/268] Linking target lib/librte_security.so.24.1 00:03:23.666 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:23.666 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:23.925 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:23.925 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:23.925 [266/268] Linking target lib/librte_power.so.24.1 00:03:24.860 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.119 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:25.119 INFO: autodetecting backend as ninja 00:03:25.119 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:43.210 CC lib/ut/ut.o 00:03:43.210 CC lib/log/log.o 00:03:43.210 CC lib/log/log_flags.o 00:03:43.210 CC lib/log/log_deprecated.o 00:03:43.210 CC lib/ut_mock/mock.o 00:03:43.210 LIB libspdk_ut.a 00:03:43.210 LIB libspdk_log.a 00:03:43.210 LIB libspdk_ut_mock.a 00:03:43.210 SO libspdk_ut.so.2.0 00:03:43.210 SO libspdk_log.so.7.1 00:03:43.210 SO libspdk_ut_mock.so.6.0 00:03:43.210 SYMLINK libspdk_ut.so 00:03:43.210 SYMLINK libspdk_log.so 00:03:43.210 SYMLINK libspdk_ut_mock.so 00:03:43.210 CC lib/ioat/ioat.o 00:03:43.210 CC lib/util/base64.o 00:03:43.210 CC lib/util/bit_array.o 00:03:43.210 CC lib/util/cpuset.o 00:03:43.210 CC lib/util/crc32c.o 00:03:43.210 CC lib/util/crc16.o 00:03:43.210 CC lib/util/crc32.o 00:03:43.210 CC lib/dma/dma.o 00:03:43.210 CXX lib/trace_parser/trace.o 00:03:43.210 CC lib/util/crc32_ieee.o 00:03:43.210 CC lib/util/crc64.o 00:03:43.210 CC lib/util/dif.o 00:03:43.210 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.210 CC lib/vfio_user/host/vfio_user.o 00:03:43.210 CC lib/util/fd.o 00:03:43.470 CC lib/util/fd_group.o 00:03:43.470 LIB libspdk_dma.a 00:03:43.470 SO libspdk_dma.so.5.0 00:03:43.470 CC lib/util/file.o 00:03:43.470 LIB libspdk_ioat.a 00:03:43.470 SO libspdk_ioat.so.7.0 00:03:43.470 CC lib/util/hexlify.o 00:03:43.470 SYMLINK libspdk_dma.so 00:03:43.470 CC lib/util/iov.o 00:03:43.470 CC lib/util/math.o 00:03:43.470 SYMLINK libspdk_ioat.so 00:03:43.470 CC lib/util/net.o 00:03:43.470 CC lib/util/pipe.o 00:03:43.470 CC lib/util/strerror_tls.o 00:03:43.470 LIB libspdk_vfio_user.a 00:03:43.470 CC lib/util/string.o 00:03:43.731 SO libspdk_vfio_user.so.5.0 00:03:43.731 CC lib/util/uuid.o 00:03:43.731 CC lib/util/xor.o 00:03:43.731 CC lib/util/zipf.o 00:03:43.731 CC lib/util/md5.o 00:03:43.731 SYMLINK libspdk_vfio_user.so 00:03:43.990 LIB libspdk_util.a 00:03:44.249 SO libspdk_util.so.10.1 00:03:44.249 LIB 
libspdk_trace_parser.a 00:03:44.249 SO libspdk_trace_parser.so.6.0 00:03:44.508 SYMLINK libspdk_util.so 00:03:44.508 SYMLINK libspdk_trace_parser.so 00:03:44.508 CC lib/idxd/idxd.o 00:03:44.508 CC lib/idxd/idxd_kernel.o 00:03:44.508 CC lib/idxd/idxd_user.o 00:03:44.508 CC lib/vmd/vmd.o 00:03:44.508 CC lib/vmd/led.o 00:03:44.508 CC lib/conf/conf.o 00:03:44.508 CC lib/json/json_parse.o 00:03:44.508 CC lib/json/json_util.o 00:03:44.508 CC lib/rdma_utils/rdma_utils.o 00:03:44.767 CC lib/env_dpdk/env.o 00:03:44.767 CC lib/env_dpdk/memory.o 00:03:44.767 CC lib/env_dpdk/pci.o 00:03:44.767 LIB libspdk_conf.a 00:03:44.767 CC lib/json/json_write.o 00:03:44.767 CC lib/env_dpdk/init.o 00:03:44.767 SO libspdk_conf.so.6.0 00:03:45.029 CC lib/env_dpdk/threads.o 00:03:45.029 LIB libspdk_rdma_utils.a 00:03:45.029 SYMLINK libspdk_conf.so 00:03:45.029 SO libspdk_rdma_utils.so.1.0 00:03:45.029 CC lib/env_dpdk/pci_ioat.o 00:03:45.029 SYMLINK libspdk_rdma_utils.so 00:03:45.029 CC lib/env_dpdk/pci_virtio.o 00:03:45.029 CC lib/env_dpdk/pci_vmd.o 00:03:45.029 CC lib/env_dpdk/pci_idxd.o 00:03:45.029 CC lib/env_dpdk/pci_event.o 00:03:45.287 LIB libspdk_json.a 00:03:45.287 CC lib/env_dpdk/sigbus_handler.o 00:03:45.287 CC lib/env_dpdk/pci_dpdk.o 00:03:45.287 SO libspdk_json.so.6.0 00:03:45.287 LIB libspdk_idxd.a 00:03:45.287 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.287 SYMLINK libspdk_json.so 00:03:45.287 CC lib/rdma_provider/common.o 00:03:45.287 SO libspdk_idxd.so.12.1 00:03:45.287 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:45.287 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.287 LIB libspdk_vmd.a 00:03:45.287 SYMLINK libspdk_idxd.so 00:03:45.288 SO libspdk_vmd.so.6.0 00:03:45.547 SYMLINK libspdk_vmd.so 00:03:45.547 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.547 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.547 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.547 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.547 LIB libspdk_rdma_provider.a 00:03:45.547 SO libspdk_rdma_provider.so.7.0 00:03:45.806 SYMLINK libspdk_rdma_provider.so 00:03:45.806 LIB libspdk_jsonrpc.a 00:03:45.806 SO libspdk_jsonrpc.so.6.0 00:03:46.065 SYMLINK libspdk_jsonrpc.so 00:03:46.324 LIB libspdk_env_dpdk.a 00:03:46.324 CC lib/rpc/rpc.o 00:03:46.324 SO libspdk_env_dpdk.so.15.1 00:03:46.583 SYMLINK libspdk_env_dpdk.so 00:03:46.583 LIB libspdk_rpc.a 00:03:46.583 SO libspdk_rpc.so.6.0 00:03:46.843 SYMLINK libspdk_rpc.so 00:03:47.102 CC lib/notify/notify.o 00:03:47.102 CC lib/notify/notify_rpc.o 00:03:47.102 CC lib/keyring/keyring.o 00:03:47.102 CC lib/keyring/keyring_rpc.o 00:03:47.102 CC lib/trace/trace_flags.o 00:03:47.102 CC lib/trace/trace_rpc.o 00:03:47.102 CC lib/trace/trace.o 00:03:47.361 LIB libspdk_notify.a 00:03:47.361 SO libspdk_notify.so.6.0 00:03:47.361 LIB libspdk_keyring.a 00:03:47.361 LIB libspdk_trace.a 00:03:47.362 SYMLINK libspdk_notify.so 00:03:47.362 SO libspdk_keyring.so.2.0 00:03:47.362 SO libspdk_trace.so.11.0 00:03:47.621 SYMLINK libspdk_keyring.so 00:03:47.621 SYMLINK libspdk_trace.so 00:03:47.881 CC lib/thread/thread.o 00:03:47.881 CC lib/thread/iobuf.o 00:03:47.881 CC lib/sock/sock.o 00:03:47.881 CC lib/sock/sock_rpc.o 00:03:48.449 LIB libspdk_sock.a 00:03:48.449 SO libspdk_sock.so.10.0 00:03:48.449 SYMLINK libspdk_sock.so 00:03:49.017 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.017 CC lib/nvme/nvme_fabric.o 00:03:49.017 CC lib/nvme/nvme_ctrlr.o 00:03:49.017 CC lib/nvme/nvme_ns_cmd.o 00:03:49.017 CC lib/nvme/nvme_ns.o 00:03:49.017 CC lib/nvme/nvme_pcie_common.o 00:03:49.017 CC lib/nvme/nvme_pcie.o 00:03:49.017 CC lib/nvme/nvme_qpair.o 
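
From here the SPDK tree itself is building, and every component cycles through the same three tags: LIB (archive the component's objects into a static library), SO (link them into a versioned shared object), and SYMLINK (point the unversioned name at that object). A minimal sketch of what the rules in SPDK's mk/spdk.lib.mk expand to, using the log library above as the example (the real link line also applies a version map and dependency libraries):

  ar rcs build/lib/libspdk_log.a log.o log_flags.o log_deprecated.o
  cc -shared -o build/lib/libspdk_log.so.7.1 -Wl,-soname,libspdk_log.so.7.1 \
      -Wl,--whole-archive build/lib/libspdk_log.a -Wl,--no-whole-archive
  ln -sf libspdk_log.so.7.1 build/lib/libspdk_log.so

The numeric suffix (7.1 for log, 10.1 for util, and so on) is each library's individual ABI version, which is why the SO lines carry different numbers.
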
00:03:49.017 CC lib/nvme/nvme.o 00:03:49.585 CC lib/nvme/nvme_quirks.o 00:03:49.585 LIB libspdk_thread.a 00:03:49.585 CC lib/nvme/nvme_transport.o 00:03:49.585 SO libspdk_thread.so.11.0 00:03:49.585 CC lib/nvme/nvme_discovery.o 00:03:49.585 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.844 SYMLINK libspdk_thread.so 00:03:49.844 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.844 CC lib/nvme/nvme_tcp.o 00:03:49.844 CC lib/nvme/nvme_opal.o 00:03:49.844 CC lib/nvme/nvme_io_msg.o 00:03:50.103 CC lib/nvme/nvme_poll_group.o 00:03:50.103 CC lib/accel/accel.o 00:03:50.363 CC lib/blob/blobstore.o 00:03:50.363 CC lib/init/json_config.o 00:03:50.363 CC lib/init/subsystem.o 00:03:50.363 CC lib/init/subsystem_rpc.o 00:03:50.363 CC lib/init/rpc.o 00:03:50.656 CC lib/virtio/virtio.o 00:03:50.656 CC lib/blob/request.o 00:03:50.656 CC lib/blob/zeroes.o 00:03:50.656 LIB libspdk_init.a 00:03:50.656 CC lib/blob/blob_bs_dev.o 00:03:50.656 SO libspdk_init.so.6.0 00:03:50.656 CC lib/fsdev/fsdev.o 00:03:50.656 SYMLINK libspdk_init.so 00:03:50.656 CC lib/fsdev/fsdev_io.o 00:03:50.915 CC lib/virtio/virtio_vhost_user.o 00:03:50.915 CC lib/virtio/virtio_vfio_user.o 00:03:50.915 CC lib/virtio/virtio_pci.o 00:03:50.915 CC lib/accel/accel_rpc.o 00:03:51.174 CC lib/accel/accel_sw.o 00:03:51.174 CC lib/fsdev/fsdev_rpc.o 00:03:51.174 CC lib/nvme/nvme_zns.o 00:03:51.174 LIB libspdk_virtio.a 00:03:51.174 SO libspdk_virtio.so.7.0 00:03:51.174 CC lib/nvme/nvme_stubs.o 00:03:51.174 CC lib/nvme/nvme_auth.o 00:03:51.433 SYMLINK libspdk_virtio.so 00:03:51.433 CC lib/nvme/nvme_cuse.o 00:03:51.433 CC lib/nvme/nvme_rdma.o 00:03:51.433 CC lib/event/app.o 00:03:51.433 CC lib/event/reactor.o 00:03:51.433 LIB libspdk_fsdev.a 00:03:51.433 LIB libspdk_accel.a 00:03:51.433 SO libspdk_fsdev.so.2.0 00:03:51.433 SO libspdk_accel.so.16.0 00:03:51.692 SYMLINK libspdk_fsdev.so 00:03:51.692 SYMLINK libspdk_accel.so 00:03:51.692 CC lib/event/log_rpc.o 00:03:51.692 CC lib/event/app_rpc.o 00:03:51.692 CC lib/event/scheduler_static.o 00:03:51.958 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:52.217 LIB libspdk_event.a 00:03:52.217 CC lib/bdev/bdev.o 00:03:52.217 CC lib/bdev/bdev_zone.o 00:03:52.217 CC lib/bdev/bdev_rpc.o 00:03:52.217 CC lib/bdev/part.o 00:03:52.217 SO libspdk_event.so.14.0 00:03:52.217 CC lib/bdev/scsi_nvme.o 00:03:52.217 SYMLINK libspdk_event.so 00:03:52.786 LIB libspdk_fuse_dispatcher.a 00:03:52.786 SO libspdk_fuse_dispatcher.so.1.0 00:03:52.786 SYMLINK libspdk_fuse_dispatcher.so 00:03:52.786 LIB libspdk_nvme.a 00:03:53.046 SO libspdk_nvme.so.15.0 00:03:53.305 SYMLINK libspdk_nvme.so 00:03:53.875 LIB libspdk_blob.a 00:03:53.876 SO libspdk_blob.so.11.0 00:03:54.136 SYMLINK libspdk_blob.so 00:03:54.396 CC lib/blobfs/blobfs.o 00:03:54.396 CC lib/blobfs/tree.o 00:03:54.396 CC lib/lvol/lvol.o 00:03:55.336 LIB libspdk_bdev.a 00:03:55.336 SO libspdk_bdev.so.17.0 00:03:55.336 LIB libspdk_blobfs.a 00:03:55.336 SO libspdk_blobfs.so.10.0 00:03:55.596 SYMLINK libspdk_bdev.so 00:03:55.596 SYMLINK libspdk_blobfs.so 00:03:55.596 LIB libspdk_lvol.a 00:03:55.596 SO libspdk_lvol.so.10.0 00:03:55.596 SYMLINK libspdk_lvol.so 00:03:55.596 CC lib/scsi/dev.o 00:03:55.596 CC lib/scsi/lun.o 00:03:55.596 CC lib/scsi/port.o 00:03:55.596 CC lib/nbd/nbd.o 00:03:55.596 CC lib/scsi/scsi.o 00:03:55.596 CC lib/scsi/scsi_bdev.o 00:03:55.596 CC lib/nbd/nbd_rpc.o 00:03:55.596 CC lib/nvmf/ctrlr.o 00:03:55.596 CC lib/ublk/ublk.o 00:03:55.596 CC lib/ftl/ftl_core.o 00:03:55.856 CC lib/ftl/ftl_init.o 00:03:55.856 CC lib/ftl/ftl_layout.o 00:03:55.856 CC lib/ftl/ftl_debug.o 
00:03:55.856 CC lib/scsi/scsi_pr.o 00:03:56.115 CC lib/ftl/ftl_io.o 00:03:56.115 CC lib/ftl/ftl_sb.o 00:03:56.115 LIB libspdk_nbd.a 00:03:56.115 CC lib/ublk/ublk_rpc.o 00:03:56.115 CC lib/scsi/scsi_rpc.o 00:03:56.115 SO libspdk_nbd.so.7.0 00:03:56.115 CC lib/scsi/task.o 00:03:56.115 SYMLINK libspdk_nbd.so 00:03:56.115 CC lib/ftl/ftl_l2p.o 00:03:56.115 CC lib/nvmf/ctrlr_discovery.o 00:03:56.374 CC lib/nvmf/ctrlr_bdev.o 00:03:56.374 CC lib/ftl/ftl_l2p_flat.o 00:03:56.374 CC lib/ftl/ftl_nv_cache.o 00:03:56.374 CC lib/nvmf/subsystem.o 00:03:56.374 CC lib/ftl/ftl_band.o 00:03:56.374 LIB libspdk_ublk.a 00:03:56.374 SO libspdk_ublk.so.3.0 00:03:56.374 CC lib/ftl/ftl_band_ops.o 00:03:56.374 LIB libspdk_scsi.a 00:03:56.374 CC lib/ftl/ftl_writer.o 00:03:56.374 SYMLINK libspdk_ublk.so 00:03:56.634 CC lib/ftl/ftl_rq.o 00:03:56.634 SO libspdk_scsi.so.9.0 00:03:56.634 SYMLINK libspdk_scsi.so 00:03:56.634 CC lib/ftl/ftl_reloc.o 00:03:56.893 CC lib/ftl/ftl_l2p_cache.o 00:03:56.893 CC lib/ftl/ftl_p2l.o 00:03:56.893 CC lib/ftl/ftl_p2l_log.o 00:03:56.893 CC lib/iscsi/conn.o 00:03:56.893 CC lib/vhost/vhost.o 00:03:57.152 CC lib/vhost/vhost_rpc.o 00:03:57.153 CC lib/vhost/vhost_scsi.o 00:03:57.153 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.153 CC lib/iscsi/init_grp.o 00:03:57.413 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.413 CC lib/vhost/rte_vhost_user.o 00:03:57.413 CC lib/vhost/vhost_blk.o 00:03:57.413 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.413 CC lib/iscsi/iscsi.o 00:03:57.673 CC lib/iscsi/param.o 00:03:57.673 CC lib/iscsi/portal_grp.o 00:03:57.673 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:57.673 CC lib/nvmf/nvmf.o 00:03:57.673 CC lib/nvmf/nvmf_rpc.o 00:03:57.932 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:57.932 CC lib/nvmf/transport.o 00:03:57.932 CC lib/iscsi/tgt_node.o 00:03:57.932 CC lib/iscsi/iscsi_subsystem.o 00:03:57.932 CC lib/nvmf/tcp.o 00:03:58.191 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.479 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.479 CC lib/nvmf/stubs.o 00:03:58.479 CC lib/nvmf/mdns_server.o 00:03:58.479 CC lib/nvmf/rdma.o 00:03:58.479 LIB libspdk_vhost.a 00:03:58.479 SO libspdk_vhost.so.8.0 00:03:58.479 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.760 SYMLINK libspdk_vhost.so 00:03:58.760 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.760 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.760 CC lib/nvmf/auth.o 00:03:58.760 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.760 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.760 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.760 CC lib/ftl/utils/ftl_conf.o 00:03:59.020 CC lib/ftl/utils/ftl_md.o 00:03:59.020 CC lib/ftl/utils/ftl_mempool.o 00:03:59.020 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.020 CC lib/ftl/utils/ftl_property.o 00:03:59.020 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.020 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.280 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.280 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.280 CC lib/iscsi/iscsi_rpc.o 00:03:59.280 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.280 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.280 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.280 CC lib/iscsi/task.o 00:03:59.280 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.540 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.540 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.540 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.540 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.540 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.540 CC lib/ftl/base/ftl_base_dev.o 00:03:59.540 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.540 CC lib/ftl/ftl_trace.o 00:03:59.540 LIB libspdk_iscsi.a 
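
Once the remaining core libraries (iscsi, ftl, nvmf) link out below, the build moves from lib/ into the pluggable module/ tree: schedulers, sock and keyring implementations, accel engines, the bdev backends, and the event subsystems, each going through the same LIB/SO/SYMLINK cycle. After an application target exists, one way to confirm which module RPCs were compiled in is the following sketch, assuming spdk_tgt has been built and hugepages were set up via scripts/setup.sh:

  ./build/bin/spdk_tgt &
  sleep 2                                   # give the target a moment to initialize
  ./scripts/rpc.py rpc_get_methods | grep '"bdev_'
  ./scripts/rpc.py spdk_kill_instance SIGTERM
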
00:03:59.799 SO libspdk_iscsi.so.8.0 00:03:59.799 SYMLINK libspdk_iscsi.so 00:03:59.799 LIB libspdk_ftl.a 00:04:00.368 SO libspdk_ftl.so.9.0 00:04:00.625 SYMLINK libspdk_ftl.so 00:04:00.882 LIB libspdk_nvmf.a 00:04:01.141 SO libspdk_nvmf.so.20.0 00:04:01.400 SYMLINK libspdk_nvmf.so 00:04:01.660 CC module/env_dpdk/env_dpdk_rpc.o 00:04:01.660 CC module/scheduler/gscheduler/gscheduler.o 00:04:01.660 CC module/sock/posix/posix.o 00:04:01.919 CC module/keyring/file/keyring.o 00:04:01.919 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:01.919 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:01.919 CC module/accel/ioat/accel_ioat.o 00:04:01.919 CC module/blob/bdev/blob_bdev.o 00:04:01.919 CC module/fsdev/aio/fsdev_aio.o 00:04:01.919 CC module/accel/error/accel_error.o 00:04:01.919 LIB libspdk_env_dpdk_rpc.a 00:04:01.919 SO libspdk_env_dpdk_rpc.so.6.0 00:04:01.919 LIB libspdk_scheduler_gscheduler.a 00:04:01.919 SYMLINK libspdk_env_dpdk_rpc.so 00:04:01.919 CC module/keyring/file/keyring_rpc.o 00:04:01.919 LIB libspdk_scheduler_dpdk_governor.a 00:04:01.919 CC module/accel/ioat/accel_ioat_rpc.o 00:04:01.919 SO libspdk_scheduler_gscheduler.so.4.0 00:04:01.919 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:01.919 CC module/accel/error/accel_error_rpc.o 00:04:01.919 LIB libspdk_scheduler_dynamic.a 00:04:01.919 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:01.919 SO libspdk_scheduler_dynamic.so.4.0 00:04:01.919 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.919 CC module/fsdev/aio/linux_aio_mgr.o 00:04:01.919 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.178 LIB libspdk_keyring_file.a 00:04:02.178 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.178 LIB libspdk_blob_bdev.a 00:04:02.178 LIB libspdk_accel_ioat.a 00:04:02.178 SO libspdk_keyring_file.so.2.0 00:04:02.178 LIB libspdk_accel_error.a 00:04:02.178 SO libspdk_blob_bdev.so.11.0 00:04:02.178 SO libspdk_accel_ioat.so.6.0 00:04:02.178 SO libspdk_accel_error.so.2.0 00:04:02.178 SYMLINK libspdk_keyring_file.so 00:04:02.178 SYMLINK libspdk_accel_ioat.so 00:04:02.178 SYMLINK libspdk_blob_bdev.so 00:04:02.178 CC module/keyring/linux/keyring_rpc.o 00:04:02.178 CC module/keyring/linux/keyring.o 00:04:02.178 SYMLINK libspdk_accel_error.so 00:04:02.178 CC module/accel/dsa/accel_dsa.o 00:04:02.178 CC module/accel/dsa/accel_dsa_rpc.o 00:04:02.178 CC module/accel/iaa/accel_iaa.o 00:04:02.178 CC module/accel/iaa/accel_iaa_rpc.o 00:04:02.437 LIB libspdk_keyring_linux.a 00:04:02.437 SO libspdk_keyring_linux.so.1.0 00:04:02.437 CC module/bdev/delay/vbdev_delay.o 00:04:02.437 CC module/blobfs/bdev/blobfs_bdev.o 00:04:02.437 LIB libspdk_accel_iaa.a 00:04:02.437 SYMLINK libspdk_keyring_linux.so 00:04:02.437 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:02.437 CC module/bdev/error/vbdev_error.o 00:04:02.437 SO libspdk_accel_iaa.so.3.0 00:04:02.437 LIB libspdk_fsdev_aio.a 00:04:02.697 LIB libspdk_accel_dsa.a 00:04:02.697 SO libspdk_fsdev_aio.so.1.0 00:04:02.697 LIB libspdk_sock_posix.a 00:04:02.697 SO libspdk_accel_dsa.so.5.0 00:04:02.697 CC module/bdev/lvol/vbdev_lvol.o 00:04:02.697 SYMLINK libspdk_accel_iaa.so 00:04:02.697 CC module/bdev/gpt/gpt.o 00:04:02.697 SO libspdk_sock_posix.so.6.0 00:04:02.697 CC module/bdev/error/vbdev_error_rpc.o 00:04:02.697 SYMLINK libspdk_accel_dsa.so 00:04:02.697 SYMLINK libspdk_fsdev_aio.so 00:04:02.697 LIB libspdk_blobfs_bdev.a 00:04:02.697 SYMLINK libspdk_sock_posix.so 00:04:02.697 SO libspdk_blobfs_bdev.so.6.0 00:04:02.697 SYMLINK libspdk_blobfs_bdev.so 00:04:02.958 CC module/bdev/gpt/vbdev_gpt.o 00:04:02.958 CC 
module/bdev/malloc/bdev_malloc.o 00:04:02.958 LIB libspdk_bdev_error.a 00:04:02.958 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:02.958 CC module/bdev/null/bdev_null.o 00:04:02.958 CC module/bdev/nvme/bdev_nvme.o 00:04:02.958 SO libspdk_bdev_error.so.6.0 00:04:02.958 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.958 SYMLINK libspdk_bdev_error.so 00:04:02.958 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.958 CC module/bdev/raid/bdev_raid.o 00:04:02.958 CC module/bdev/split/vbdev_split.o 00:04:02.958 LIB libspdk_bdev_delay.a 00:04:02.958 SO libspdk_bdev_delay.so.6.0 00:04:03.218 CC module/bdev/split/vbdev_split_rpc.o 00:04:03.218 LIB libspdk_bdev_gpt.a 00:04:03.218 SYMLINK libspdk_bdev_delay.so 00:04:03.218 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:03.218 CC module/bdev/null/bdev_null_rpc.o 00:04:03.218 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:03.218 SO libspdk_bdev_gpt.so.6.0 00:04:03.218 CC module/bdev/nvme/nvme_rpc.o 00:04:03.218 SYMLINK libspdk_bdev_gpt.so 00:04:03.218 CC module/bdev/nvme/bdev_mdns_client.o 00:04:03.218 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:03.218 LIB libspdk_bdev_passthru.a 00:04:03.218 LIB libspdk_bdev_split.a 00:04:03.218 LIB libspdk_bdev_null.a 00:04:03.218 SO libspdk_bdev_passthru.so.6.0 00:04:03.218 SO libspdk_bdev_split.so.6.0 00:04:03.218 SO libspdk_bdev_null.so.6.0 00:04:03.478 SYMLINK libspdk_bdev_passthru.so 00:04:03.478 CC module/bdev/raid/bdev_raid_rpc.o 00:04:03.478 CC module/bdev/raid/bdev_raid_sb.o 00:04:03.478 SYMLINK libspdk_bdev_split.so 00:04:03.478 CC module/bdev/raid/raid0.o 00:04:03.478 SYMLINK libspdk_bdev_null.so 00:04:03.478 CC module/bdev/raid/raid1.o 00:04:03.478 LIB libspdk_bdev_malloc.a 00:04:03.478 CC module/bdev/nvme/vbdev_opal.o 00:04:03.478 SO libspdk_bdev_malloc.so.6.0 00:04:03.478 SYMLINK libspdk_bdev_malloc.so 00:04:03.478 LIB libspdk_bdev_lvol.a 00:04:03.478 SO libspdk_bdev_lvol.so.6.0 00:04:03.737 CC module/bdev/raid/concat.o 00:04:03.737 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:03.737 SYMLINK libspdk_bdev_lvol.so 00:04:03.737 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:03.737 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:03.737 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:03.996 CC module/bdev/xnvme/bdev_xnvme.o 00:04:03.996 CC module/bdev/aio/bdev_aio.o 00:04:03.996 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:03.996 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.996 CC module/bdev/ftl/bdev_ftl.o 00:04:03.996 CC module/bdev/iscsi/bdev_iscsi.o 00:04:03.996 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:03.996 LIB libspdk_bdev_zone_block.a 00:04:03.996 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.996 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:03.996 SO libspdk_bdev_zone_block.so.6.0 00:04:03.996 LIB libspdk_bdev_raid.a 00:04:04.255 SYMLINK libspdk_bdev_zone_block.so 00:04:04.255 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:04.255 SO libspdk_bdev_raid.so.6.0 00:04:04.255 LIB libspdk_bdev_aio.a 00:04:04.255 LIB libspdk_bdev_xnvme.a 00:04:04.255 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:04.255 SO libspdk_bdev_aio.so.6.0 00:04:04.255 LIB libspdk_bdev_ftl.a 00:04:04.255 SO libspdk_bdev_xnvme.so.3.0 00:04:04.255 SYMLINK libspdk_bdev_raid.so 00:04:04.255 SO libspdk_bdev_ftl.so.6.0 00:04:04.255 SYMLINK libspdk_bdev_aio.so 00:04:04.514 SYMLINK libspdk_bdev_xnvme.so 00:04:04.514 SYMLINK libspdk_bdev_ftl.so 00:04:04.514 LIB libspdk_bdev_iscsi.a 00:04:04.514 SO libspdk_bdev_iscsi.so.6.0 00:04:04.514 SYMLINK libspdk_bdev_iscsi.so 00:04:04.514 LIB libspdk_bdev_virtio.a 00:04:04.773 SO 
libspdk_bdev_virtio.so.6.0 00:04:04.773 SYMLINK libspdk_bdev_virtio.so 00:04:05.711 LIB libspdk_bdev_nvme.a 00:04:05.711 SO libspdk_bdev_nvme.so.7.1 00:04:05.970 SYMLINK libspdk_bdev_nvme.so 00:04:06.551 CC module/event/subsystems/scheduler/scheduler.o 00:04:06.551 CC module/event/subsystems/sock/sock.o 00:04:06.551 CC module/event/subsystems/iobuf/iobuf.o 00:04:06.551 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:06.551 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:06.551 CC module/event/subsystems/keyring/keyring.o 00:04:06.551 CC module/event/subsystems/vmd/vmd.o 00:04:06.551 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:06.551 CC module/event/subsystems/fsdev/fsdev.o 00:04:06.811 LIB libspdk_event_sock.a 00:04:06.811 LIB libspdk_event_vhost_blk.a 00:04:06.811 LIB libspdk_event_keyring.a 00:04:06.811 LIB libspdk_event_scheduler.a 00:04:06.811 LIB libspdk_event_fsdev.a 00:04:06.811 LIB libspdk_event_vmd.a 00:04:06.811 SO libspdk_event_vhost_blk.so.3.0 00:04:06.811 SO libspdk_event_sock.so.5.0 00:04:06.811 SO libspdk_event_keyring.so.1.0 00:04:06.811 SO libspdk_event_scheduler.so.4.0 00:04:06.811 SO libspdk_event_fsdev.so.1.0 00:04:06.811 LIB libspdk_event_iobuf.a 00:04:06.811 SO libspdk_event_vmd.so.6.0 00:04:06.811 SYMLINK libspdk_event_keyring.so 00:04:06.811 SYMLINK libspdk_event_vhost_blk.so 00:04:06.811 SYMLINK libspdk_event_sock.so 00:04:06.811 SO libspdk_event_iobuf.so.3.0 00:04:06.811 SYMLINK libspdk_event_fsdev.so 00:04:06.811 SYMLINK libspdk_event_scheduler.so 00:04:06.811 SYMLINK libspdk_event_vmd.so 00:04:06.811 SYMLINK libspdk_event_iobuf.so 00:04:07.380 CC module/event/subsystems/accel/accel.o 00:04:07.380 LIB libspdk_event_accel.a 00:04:07.380 SO libspdk_event_accel.so.6.0 00:04:07.639 SYMLINK libspdk_event_accel.so 00:04:07.898 CC module/event/subsystems/bdev/bdev.o 00:04:08.156 LIB libspdk_event_bdev.a 00:04:08.156 SO libspdk_event_bdev.so.6.0 00:04:08.416 SYMLINK libspdk_event_bdev.so 00:04:08.675 CC module/event/subsystems/nbd/nbd.o 00:04:08.675 CC module/event/subsystems/scsi/scsi.o 00:04:08.675 CC module/event/subsystems/ublk/ublk.o 00:04:08.675 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.675 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.675 LIB libspdk_event_nbd.a 00:04:08.934 SO libspdk_event_nbd.so.6.0 00:04:08.935 LIB libspdk_event_ublk.a 00:04:08.935 LIB libspdk_event_scsi.a 00:04:08.935 SO libspdk_event_ublk.so.3.0 00:04:08.935 SYMLINK libspdk_event_nbd.so 00:04:08.935 SO libspdk_event_scsi.so.6.0 00:04:08.935 SYMLINK libspdk_event_ublk.so 00:04:08.935 LIB libspdk_event_nvmf.a 00:04:08.935 SYMLINK libspdk_event_scsi.so 00:04:08.935 SO libspdk_event_nvmf.so.6.0 00:04:08.935 SYMLINK libspdk_event_nvmf.so 00:04:09.503 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.503 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.503 LIB libspdk_event_vhost_scsi.a 00:04:09.503 LIB libspdk_event_iscsi.a 00:04:09.503 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.503 SO libspdk_event_iscsi.so.6.0 00:04:09.503 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.503 SYMLINK libspdk_event_iscsi.so 00:04:09.762 SO libspdk.so.6.0 00:04:09.762 SYMLINK libspdk.so 00:04:10.021 CXX app/trace/trace.o 00:04:10.280 CC app/trace_record/trace_record.o 00:04:10.280 CC app/spdk_lspci/spdk_lspci.o 00:04:10.280 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:10.280 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.280 CC app/nvmf_tgt/nvmf_main.o 00:04:10.280 CC app/spdk_tgt/spdk_tgt.o 00:04:10.280 CC examples/util/zipf/zipf.o 00:04:10.280 CC examples/ioat/perf/perf.o 
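
With the combined libspdk.so symlinked, the app/, examples/, and test/ trees compile, and each LINK line that follows drops a finished binary into build/bin or build/examples. One detail worth noting: module and event-subsystem archives are pulled into applications with --whole-archive, because their registration runs from constructor functions that nothing references by name. A heavily abbreviated sketch of the shape of such a link (the real command is generated by SPDK's app makefile rules and lists far more libraries):

  cc -o build/bin/spdk_tgt app/spdk_tgt/spdk_tgt.o -Lbuild/lib \
      -Wl,--whole-archive -lspdk_event -lspdk_event_bdev -lspdk_bdev_malloc \
      -Wl,--no-whole-archive -lspdk_nvme -lspdk_util -lspdk_log -lspdk_env_dpdk
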
00:04:10.280 CC test/thread/poller_perf/poller_perf.o 00:04:10.280 LINK spdk_lspci 00:04:10.280 LINK interrupt_tgt 00:04:10.280 LINK nvmf_tgt 00:04:10.280 LINK iscsi_tgt 00:04:10.280 LINK zipf 00:04:10.280 LINK poller_perf 00:04:10.539 LINK spdk_tgt 00:04:10.539 LINK spdk_trace_record 00:04:10.539 LINK ioat_perf 00:04:10.539 LINK spdk_trace 00:04:10.539 CC app/spdk_nvme_perf/perf.o 00:04:10.539 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.539 CC app/spdk_nvme_identify/identify.o 00:04:10.798 CC examples/ioat/verify/verify.o 00:04:10.798 CC app/spdk_top/spdk_top.o 00:04:10.798 CC app/spdk_dd/spdk_dd.o 00:04:10.798 CC test/dma/test_dma/test_dma.o 00:04:10.798 CC test/app/bdev_svc/bdev_svc.o 00:04:10.798 CC examples/thread/thread/thread_ex.o 00:04:10.798 LINK spdk_nvme_discover 00:04:10.798 CC app/fio/nvme/fio_plugin.o 00:04:11.058 LINK verify 00:04:11.058 LINK bdev_svc 00:04:11.058 LINK thread 00:04:11.058 CC app/fio/bdev/fio_plugin.o 00:04:11.058 LINK spdk_dd 00:04:11.317 CC app/vhost/vhost.o 00:04:11.317 LINK test_dma 00:04:11.317 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.577 LINK vhost 00:04:11.577 LINK spdk_nvme 00:04:11.577 CC test/app/histogram_perf/histogram_perf.o 00:04:11.577 CC examples/sock/hello_world/hello_sock.o 00:04:11.577 LINK spdk_nvme_perf 00:04:11.577 CC test/app/jsoncat/jsoncat.o 00:04:11.577 LINK spdk_bdev 00:04:11.577 LINK spdk_nvme_identify 00:04:11.577 LINK histogram_perf 00:04:11.577 LINK spdk_top 00:04:11.577 CC test/app/stub/stub.o 00:04:11.836 LINK jsoncat 00:04:11.836 LINK nvme_fuzz 00:04:11.836 CC examples/vmd/lsvmd/lsvmd.o 00:04:11.836 LINK hello_sock 00:04:11.836 TEST_HEADER include/spdk/accel.h 00:04:11.836 CC examples/vmd/led/led.o 00:04:11.836 TEST_HEADER include/spdk/accel_module.h 00:04:11.836 TEST_HEADER include/spdk/assert.h 00:04:11.836 TEST_HEADER include/spdk/barrier.h 00:04:11.836 TEST_HEADER include/spdk/base64.h 00:04:11.836 TEST_HEADER include/spdk/bdev.h 00:04:11.836 TEST_HEADER include/spdk/bdev_module.h 00:04:11.836 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.836 TEST_HEADER include/spdk/bit_array.h 00:04:11.836 TEST_HEADER include/spdk/bit_pool.h 00:04:11.836 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.836 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.836 TEST_HEADER include/spdk/blobfs.h 00:04:11.836 TEST_HEADER include/spdk/blob.h 00:04:11.836 LINK stub 00:04:11.836 TEST_HEADER include/spdk/conf.h 00:04:11.836 TEST_HEADER include/spdk/config.h 00:04:11.836 TEST_HEADER include/spdk/cpuset.h 00:04:11.836 TEST_HEADER include/spdk/crc16.h 00:04:11.836 TEST_HEADER include/spdk/crc32.h 00:04:11.836 TEST_HEADER include/spdk/crc64.h 00:04:11.836 TEST_HEADER include/spdk/dif.h 00:04:11.836 TEST_HEADER include/spdk/dma.h 00:04:11.836 TEST_HEADER include/spdk/endian.h 00:04:11.836 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.836 TEST_HEADER include/spdk/env.h 00:04:11.836 TEST_HEADER include/spdk/event.h 00:04:11.836 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.836 TEST_HEADER include/spdk/fd_group.h 00:04:11.836 TEST_HEADER include/spdk/fd.h 00:04:11.836 TEST_HEADER include/spdk/file.h 00:04:11.836 TEST_HEADER include/spdk/fsdev.h 00:04:11.836 TEST_HEADER include/spdk/fsdev_module.h 00:04:11.836 TEST_HEADER include/spdk/ftl.h 00:04:11.836 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:11.836 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.836 TEST_HEADER include/spdk/hexlify.h 00:04:11.836 TEST_HEADER include/spdk/histogram_data.h 00:04:11.836 TEST_HEADER include/spdk/idxd.h 00:04:11.836 TEST_HEADER include/spdk/idxd_spec.h 
00:04:11.836 TEST_HEADER include/spdk/init.h 00:04:11.836 TEST_HEADER include/spdk/ioat.h 00:04:11.836 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.836 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.836 TEST_HEADER include/spdk/json.h 00:04:12.096 TEST_HEADER include/spdk/jsonrpc.h 00:04:12.096 TEST_HEADER include/spdk/keyring.h 00:04:12.096 LINK lsvmd 00:04:12.096 TEST_HEADER include/spdk/keyring_module.h 00:04:12.096 TEST_HEADER include/spdk/likely.h 00:04:12.096 TEST_HEADER include/spdk/log.h 00:04:12.096 TEST_HEADER include/spdk/lvol.h 00:04:12.096 TEST_HEADER include/spdk/md5.h 00:04:12.096 TEST_HEADER include/spdk/memory.h 00:04:12.096 TEST_HEADER include/spdk/mmio.h 00:04:12.096 TEST_HEADER include/spdk/nbd.h 00:04:12.096 TEST_HEADER include/spdk/net.h 00:04:12.096 TEST_HEADER include/spdk/notify.h 00:04:12.096 TEST_HEADER include/spdk/nvme.h 00:04:12.096 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.096 TEST_HEADER include/spdk/nvme_intel.h 00:04:12.096 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.096 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:12.096 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:12.096 TEST_HEADER include/spdk/nvme_spec.h 00:04:12.096 TEST_HEADER include/spdk/nvme_zns.h 00:04:12.096 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:12.096 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:12.096 TEST_HEADER include/spdk/nvmf.h 00:04:12.096 TEST_HEADER include/spdk/nvmf_spec.h 00:04:12.096 TEST_HEADER include/spdk/nvmf_transport.h 00:04:12.096 TEST_HEADER include/spdk/opal.h 00:04:12.096 TEST_HEADER include/spdk/opal_spec.h 00:04:12.096 TEST_HEADER include/spdk/pci_ids.h 00:04:12.096 LINK led 00:04:12.096 TEST_HEADER include/spdk/pipe.h 00:04:12.096 TEST_HEADER include/spdk/queue.h 00:04:12.096 TEST_HEADER include/spdk/reduce.h 00:04:12.096 TEST_HEADER include/spdk/rpc.h 00:04:12.096 TEST_HEADER include/spdk/scheduler.h 00:04:12.096 TEST_HEADER include/spdk/scsi.h 00:04:12.096 CC examples/idxd/perf/perf.o 00:04:12.096 TEST_HEADER include/spdk/scsi_spec.h 00:04:12.096 TEST_HEADER include/spdk/sock.h 00:04:12.096 TEST_HEADER include/spdk/stdinc.h 00:04:12.096 TEST_HEADER include/spdk/string.h 00:04:12.096 TEST_HEADER include/spdk/thread.h 00:04:12.096 TEST_HEADER include/spdk/trace.h 00:04:12.096 TEST_HEADER include/spdk/trace_parser.h 00:04:12.096 TEST_HEADER include/spdk/tree.h 00:04:12.096 TEST_HEADER include/spdk/ublk.h 00:04:12.096 TEST_HEADER include/spdk/util.h 00:04:12.096 TEST_HEADER include/spdk/uuid.h 00:04:12.096 TEST_HEADER include/spdk/version.h 00:04:12.096 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:12.096 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:12.096 TEST_HEADER include/spdk/vhost.h 00:04:12.096 TEST_HEADER include/spdk/vmd.h 00:04:12.096 TEST_HEADER include/spdk/xor.h 00:04:12.096 TEST_HEADER include/spdk/zipf.h 00:04:12.096 CXX test/cpp_headers/accel.o 00:04:12.096 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:12.096 CC test/env/vtophys/vtophys.o 00:04:12.096 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.355 CC test/event/event_perf/event_perf.o 00:04:12.355 CXX test/cpp_headers/accel_module.o 00:04:12.355 CC test/env/mem_callbacks/mem_callbacks.o 00:04:12.355 LINK vtophys 00:04:12.355 CC examples/accel/perf/accel_perf.o 00:04:12.355 LINK env_dpdk_post_init 00:04:12.355 LINK event_perf 00:04:12.355 LINK hello_fsdev 00:04:12.355 LINK idxd_perf 00:04:12.355 CXX test/cpp_headers/assert.o 00:04:12.355 LINK vhost_fuzz 00:04:12.615 CXX test/cpp_headers/barrier.o 00:04:12.615 CC test/env/memory/memory_ut.o 00:04:12.615 
CC test/event/reactor/reactor.o 00:04:12.615 CC test/env/pci/pci_ut.o 00:04:12.874 CXX test/cpp_headers/base64.o 00:04:12.874 CC examples/nvme/hello_world/hello_world.o 00:04:12.874 LINK mem_callbacks 00:04:12.874 LINK reactor 00:04:12.874 CC examples/blob/hello_world/hello_blob.o 00:04:12.874 CC test/nvme/aer/aer.o 00:04:12.874 LINK accel_perf 00:04:12.874 CXX test/cpp_headers/bdev.o 00:04:13.133 LINK hello_world 00:04:13.133 CC test/nvme/reset/reset.o 00:04:13.133 CC test/event/reactor_perf/reactor_perf.o 00:04:13.133 LINK hello_blob 00:04:13.133 CXX test/cpp_headers/bdev_module.o 00:04:13.133 LINK aer 00:04:13.133 LINK pci_ut 00:04:13.133 CC test/rpc_client/rpc_client_test.o 00:04:13.133 LINK reactor_perf 00:04:13.133 CC examples/nvme/reconnect/reconnect.o 00:04:13.392 CXX test/cpp_headers/bdev_zone.o 00:04:13.392 LINK reset 00:04:13.392 CC examples/blob/cli/blobcli.o 00:04:13.392 LINK rpc_client_test 00:04:13.392 CC test/event/app_repeat/app_repeat.o 00:04:13.392 CC test/accel/dif/dif.o 00:04:13.392 CXX test/cpp_headers/bit_array.o 00:04:13.392 CXX test/cpp_headers/bit_pool.o 00:04:13.653 CXX test/cpp_headers/blob_bdev.o 00:04:13.653 CC test/nvme/sgl/sgl.o 00:04:13.653 LINK app_repeat 00:04:13.653 LINK reconnect 00:04:13.653 LINK memory_ut 00:04:13.653 CXX test/cpp_headers/blobfs_bdev.o 00:04:13.912 LINK sgl 00:04:13.912 CC test/blobfs/mkfs/mkfs.o 00:04:13.912 CC test/event/scheduler/scheduler.o 00:04:13.912 CXX test/cpp_headers/blobfs.o 00:04:13.912 LINK blobcli 00:04:13.912 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:13.912 CXX test/cpp_headers/blob.o 00:04:13.912 CC test/lvol/esnap/esnap.o 00:04:13.912 LINK iscsi_fuzz 00:04:13.912 LINK mkfs 00:04:14.171 CXX test/cpp_headers/conf.o 00:04:14.171 LINK scheduler 00:04:14.171 CC test/nvme/e2edp/nvme_dp.o 00:04:14.171 CC examples/nvme/arbitration/arbitration.o 00:04:14.171 CC test/nvme/overhead/overhead.o 00:04:14.171 LINK dif 00:04:14.171 CXX test/cpp_headers/config.o 00:04:14.171 CXX test/cpp_headers/cpuset.o 00:04:14.171 CC test/nvme/err_injection/err_injection.o 00:04:14.431 CC test/nvme/startup/startup.o 00:04:14.431 CC examples/nvme/hotplug/hotplug.o 00:04:14.431 LINK nvme_dp 00:04:14.431 CXX test/cpp_headers/crc16.o 00:04:14.431 LINK overhead 00:04:14.431 LINK nvme_manage 00:04:14.431 LINK startup 00:04:14.431 LINK err_injection 00:04:14.431 CC test/nvme/reserve/reserve.o 00:04:14.431 LINK arbitration 00:04:14.690 LINK hotplug 00:04:14.690 CXX test/cpp_headers/crc32.o 00:04:14.690 CXX test/cpp_headers/crc64.o 00:04:14.690 CC test/bdev/bdevio/bdevio.o 00:04:14.690 LINK reserve 00:04:14.690 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:14.690 CC examples/nvme/abort/abort.o 00:04:14.690 CC test/nvme/simple_copy/simple_copy.o 00:04:14.690 CXX test/cpp_headers/dif.o 00:04:14.949 CC test/nvme/connect_stress/connect_stress.o 00:04:14.949 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:14.949 CC test/nvme/boot_partition/boot_partition.o 00:04:14.949 CXX test/cpp_headers/dma.o 00:04:14.949 LINK cmb_copy 00:04:14.949 CC test/nvme/compliance/nvme_compliance.o 00:04:14.949 LINK pmr_persistence 00:04:14.949 LINK simple_copy 00:04:14.949 LINK boot_partition 00:04:14.949 LINK connect_stress 00:04:15.207 CXX test/cpp_headers/endian.o 00:04:15.207 LINK bdevio 00:04:15.207 LINK abort 00:04:15.207 CC test/nvme/fused_ordering/fused_ordering.o 00:04:15.207 CXX test/cpp_headers/env_dpdk.o 00:04:15.207 CXX test/cpp_headers/env.o 00:04:15.207 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:15.466 CXX test/cpp_headers/event.o 00:04:15.466 CC 
examples/bdev/hello_world/hello_bdev.o 00:04:15.466 LINK nvme_compliance 00:04:15.466 CXX test/cpp_headers/fd_group.o 00:04:15.466 CC test/nvme/fdp/fdp.o 00:04:15.466 CC examples/bdev/bdevperf/bdevperf.o 00:04:15.466 LINK fused_ordering 00:04:15.466 LINK doorbell_aers 00:04:15.466 CC test/nvme/cuse/cuse.o 00:04:15.467 CXX test/cpp_headers/fd.o 00:04:15.467 CXX test/cpp_headers/file.o 00:04:15.467 CXX test/cpp_headers/fsdev.o 00:04:15.725 LINK hello_bdev 00:04:15.725 CXX test/cpp_headers/fsdev_module.o 00:04:15.725 CXX test/cpp_headers/ftl.o 00:04:15.725 CXX test/cpp_headers/fuse_dispatcher.o 00:04:15.725 CXX test/cpp_headers/gpt_spec.o 00:04:15.725 CXX test/cpp_headers/hexlify.o 00:04:15.725 CXX test/cpp_headers/histogram_data.o 00:04:15.725 LINK fdp 00:04:15.725 CXX test/cpp_headers/idxd.o 00:04:15.725 CXX test/cpp_headers/idxd_spec.o 00:04:15.985 CXX test/cpp_headers/init.o 00:04:15.985 CXX test/cpp_headers/ioat.o 00:04:15.985 CXX test/cpp_headers/ioat_spec.o 00:04:15.985 CXX test/cpp_headers/iscsi_spec.o 00:04:15.985 CXX test/cpp_headers/json.o 00:04:15.985 CXX test/cpp_headers/jsonrpc.o 00:04:15.985 CXX test/cpp_headers/keyring.o 00:04:15.985 CXX test/cpp_headers/keyring_module.o 00:04:15.985 CXX test/cpp_headers/likely.o 00:04:15.985 CXX test/cpp_headers/log.o 00:04:15.985 CXX test/cpp_headers/lvol.o 00:04:16.243 CXX test/cpp_headers/md5.o 00:04:16.243 CXX test/cpp_headers/memory.o 00:04:16.243 CXX test/cpp_headers/mmio.o 00:04:16.243 CXX test/cpp_headers/nbd.o 00:04:16.243 CXX test/cpp_headers/net.o 00:04:16.243 CXX test/cpp_headers/notify.o 00:04:16.243 CXX test/cpp_headers/nvme.o 00:04:16.243 CXX test/cpp_headers/nvme_intel.o 00:04:16.243 CXX test/cpp_headers/nvme_ocssd.o 00:04:16.243 LINK bdevperf 00:04:16.243 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:16.243 CXX test/cpp_headers/nvme_spec.o 00:04:16.501 CXX test/cpp_headers/nvme_zns.o 00:04:16.501 CXX test/cpp_headers/nvmf_cmd.o 00:04:16.501 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:16.501 CXX test/cpp_headers/nvmf.o 00:04:16.501 CXX test/cpp_headers/nvmf_spec.o 00:04:16.501 CXX test/cpp_headers/nvmf_transport.o 00:04:16.501 CXX test/cpp_headers/opal.o 00:04:16.501 CXX test/cpp_headers/opal_spec.o 00:04:16.501 CXX test/cpp_headers/pci_ids.o 00:04:16.501 CXX test/cpp_headers/pipe.o 00:04:16.760 CXX test/cpp_headers/queue.o 00:04:16.760 CXX test/cpp_headers/reduce.o 00:04:16.760 CXX test/cpp_headers/rpc.o 00:04:16.760 CXX test/cpp_headers/scheduler.o 00:04:16.760 CXX test/cpp_headers/scsi.o 00:04:16.760 CC examples/nvmf/nvmf/nvmf.o 00:04:16.760 CXX test/cpp_headers/scsi_spec.o 00:04:16.760 CXX test/cpp_headers/sock.o 00:04:16.760 CXX test/cpp_headers/stdinc.o 00:04:16.760 CXX test/cpp_headers/string.o 00:04:16.760 CXX test/cpp_headers/thread.o 00:04:16.760 CXX test/cpp_headers/trace.o 00:04:17.025 LINK cuse 00:04:17.025 CXX test/cpp_headers/trace_parser.o 00:04:17.025 CXX test/cpp_headers/tree.o 00:04:17.025 CXX test/cpp_headers/ublk.o 00:04:17.025 CXX test/cpp_headers/util.o 00:04:17.025 CXX test/cpp_headers/uuid.o 00:04:17.025 CXX test/cpp_headers/version.o 00:04:17.025 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.025 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.025 CXX test/cpp_headers/vhost.o 00:04:17.025 LINK nvmf 00:04:17.025 CXX test/cpp_headers/vmd.o 00:04:17.025 CXX test/cpp_headers/xor.o 00:04:17.025 CXX test/cpp_headers/zipf.o 00:04:20.315 LINK esnap 00:04:20.574 00:04:20.574 real 1m27.394s 00:04:20.574 user 7m20.445s 00:04:20.574 sys 1m58.347s 00:04:20.574 03:14:44 make -- common/autotest_common.sh@1128 -- $ 
xtrace_disable 00:04:20.574 03:14:44 make -- common/autotest_common.sh@10 -- $ set +x 00:04:20.574 ************************************ 00:04:20.574 END TEST make 00:04:20.574 ************************************ 00:04:20.574 03:14:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:20.574 03:14:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:20.574 03:14:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:20.574 03:14:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.574 03:14:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:20.834 03:14:44 -- pm/common@44 -- $ pid=5302 00:04:20.834 03:14:44 -- pm/common@50 -- $ kill -TERM 5302 00:04:20.834 03:14:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.834 03:14:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:20.834 03:14:44 -- pm/common@44 -- $ pid=5304 00:04:20.834 03:14:44 -- pm/common@50 -- $ kill -TERM 5304 00:04:20.834 03:14:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:20.834 03:14:44 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:20.834 03:14:44 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.834 03:14:44 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.834 03:14:44 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.834 03:14:44 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.834 03:14:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.834 03:14:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.834 03:14:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.834 03:14:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.834 03:14:44 -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.834 03:14:44 -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.834 03:14:44 -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.834 03:14:44 -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.834 03:14:44 -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.834 03:14:44 -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.834 03:14:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.834 03:14:44 -- scripts/common.sh@344 -- # case "$op" in 00:04:20.834 03:14:44 -- scripts/common.sh@345 -- # : 1 00:04:20.834 03:14:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.834 03:14:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.834 03:14:44 -- scripts/common.sh@365 -- # decimal 1 00:04:20.834 03:14:44 -- scripts/common.sh@353 -- # local d=1 00:04:20.834 03:14:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.834 03:14:44 -- scripts/common.sh@355 -- # echo 1 00:04:20.834 03:14:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.834 03:14:44 -- scripts/common.sh@366 -- # decimal 2 00:04:20.834 03:14:44 -- scripts/common.sh@353 -- # local d=2 00:04:20.834 03:14:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.834 03:14:44 -- scripts/common.sh@355 -- # echo 2 00:04:20.834 03:14:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.834 03:14:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.834 03:14:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.834 03:14:44 -- scripts/common.sh@368 -- # return 0 00:04:20.834 03:14:44 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.834 03:14:44 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.834 --rc genhtml_branch_coverage=1 00:04:20.834 --rc genhtml_function_coverage=1 00:04:20.834 --rc genhtml_legend=1 00:04:20.834 --rc geninfo_all_blocks=1 00:04:20.834 --rc geninfo_unexecuted_blocks=1 00:04:20.834 00:04:20.834 ' 00:04:20.834 03:14:44 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.834 --rc genhtml_branch_coverage=1 00:04:20.834 --rc genhtml_function_coverage=1 00:04:20.834 --rc genhtml_legend=1 00:04:20.834 --rc geninfo_all_blocks=1 00:04:20.834 --rc geninfo_unexecuted_blocks=1 00:04:20.834 00:04:20.834 ' 00:04:20.834 03:14:44 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.834 --rc genhtml_branch_coverage=1 00:04:20.834 --rc genhtml_function_coverage=1 00:04:20.834 --rc genhtml_legend=1 00:04:20.834 --rc geninfo_all_blocks=1 00:04:20.834 --rc geninfo_unexecuted_blocks=1 00:04:20.834 00:04:20.834 ' 00:04:20.834 03:14:44 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.834 --rc genhtml_branch_coverage=1 00:04:20.834 --rc genhtml_function_coverage=1 00:04:20.834 --rc genhtml_legend=1 00:04:20.834 --rc geninfo_all_blocks=1 00:04:20.834 --rc geninfo_unexecuted_blocks=1 00:04:20.834 00:04:20.834 ' 00:04:20.834 03:14:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:20.834 03:14:44 -- nvmf/common.sh@7 -- # uname -s 00:04:20.834 03:14:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.834 03:14:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.834 03:14:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.834 03:14:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.834 03:14:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.834 03:14:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.834 03:14:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.834 03:14:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.834 03:14:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.834 03:14:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.103 03:14:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:04:21.103 
03:14:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:04:21.103 03:14:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.103 03:14:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.103 03:14:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.103 03:14:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.103 03:14:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.103 03:14:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.103 03:14:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.103 03:14:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.103 03:14:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.103 03:14:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.103 03:14:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.103 03:14:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.103 03:14:44 -- paths/export.sh@5 -- # export PATH 00:04:21.104 03:14:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.104 03:14:44 -- nvmf/common.sh@51 -- # : 0 00:04:21.104 03:14:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.104 03:14:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.104 03:14:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.104 03:14:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.104 03:14:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.104 03:14:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.104 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.104 03:14:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.104 03:14:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.104 03:14:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.104 03:14:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.104 03:14:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:21.104 03:14:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.104 03:14:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:21.104 03:14:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.104 03:14:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.104 03:14:44 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.104 03:14:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.104 03:14:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.104 03:14:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:21.104 03:14:44 -- spdk/autotest.sh@48 -- # udevadm_pid=54790 00:04:21.104 03:14:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:21.104 03:14:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:21.104 03:14:44 -- pm/common@17 -- # local monitor 00:04:21.104 03:14:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.104 03:14:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.104 03:14:44 -- pm/common@21 -- # date +%s 00:04:21.104 03:14:44 -- pm/common@25 -- # sleep 1 00:04:21.104 03:14:44 -- pm/common@21 -- # date +%s 00:04:21.104 03:14:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730776484 00:04:21.104 03:14:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730776484 00:04:21.104 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730776484_collect-cpu-load.pm.log 00:04:21.104 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730776484_collect-vmstat.pm.log 00:04:22.062 03:14:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:22.062 03:14:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:22.062 03:14:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.062 03:14:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.062 03:14:45 -- spdk/autotest.sh@59 -- # create_test_list 00:04:22.062 03:14:45 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:22.062 03:14:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.062 03:14:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:22.062 03:14:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:22.062 03:14:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:22.062 03:14:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:22.062 03:14:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:22.062 03:14:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:22.063 03:14:45 -- common/autotest_common.sh@1455 -- # uname 00:04:22.063 03:14:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:22.063 03:14:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:22.063 03:14:45 -- common/autotest_common.sh@1475 -- # uname 00:04:22.063 03:14:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:22.063 03:14:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:22.063 03:14:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:22.322 lcov: LCOV version 1.15 00:04:22.322 03:14:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:40.412 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:40.412 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:55.305 03:15:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:55.305 03:15:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.305 03:15:16 -- common/autotest_common.sh@10 -- # set +x 00:04:55.305 03:15:16 -- spdk/autotest.sh@78 -- # rm -f 00:04:55.305 03:15:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.305 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:55.305 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:55.305 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:55.305 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:55.305 03:15:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:55.305 03:15:18 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:55.305 03:15:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:55.305 03:15:18 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:55.305 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.305 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:55.305 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:55.305 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.305 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.305 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.305 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:55.305 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:55.305 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:55.305 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.306 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:55.306 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:55.306 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.306 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.306 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:55.306 03:15:18 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.306 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:55.306 03:15:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:55.306 03:15:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:55.306 03:15:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:55.306 1+0 records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485059 s, 216 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:55.306 1+0 records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640262 s, 164 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:55.306 1+0 
records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674201 s, 156 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:55.306 1+0 records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144583 s, 72.5 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:55.306 1+0 records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815433 s, 129 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.306 03:15:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.306 03:15:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:55.306 03:15:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:55.306 03:15:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:55.306 No valid GPT data, bailing 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:55.306 03:15:18 -- scripts/common.sh@394 -- # pt= 00:04:55.306 03:15:18 -- scripts/common.sh@395 -- # return 1 00:04:55.306 03:15:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:55.306 1+0 records in 00:04:55.306 1+0 records out 00:04:55.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657319 s, 160 MB/s 00:04:55.306 03:15:18 -- spdk/autotest.sh@105 -- # sync 00:04:55.306 03:15:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.306 03:15:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.306 03:15:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.679 03:15:21 -- spdk/autotest.sh@111 -- # uname -s 00:04:58.679 03:15:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:58.679 03:15:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:58.679 03:15:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:59.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.817 
Hugepages 00:04:59.817 node hugesize free / total 00:04:59.818 node0 1048576kB 0 / 0 00:04:59.818 node0 2048kB 0 / 0 00:04:59.818 00:04:59.818 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.818 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:59.818 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:00.077 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:00.077 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:00.336 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:00.336 03:15:23 -- spdk/autotest.sh@117 -- # uname -s 00:05:00.336 03:15:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:00.336 03:15:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:00.336 03:15:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.843 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.843 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.843 03:15:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:03.223 03:15:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:03.223 03:15:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:03.223 03:15:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.223 03:15:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:03.223 03:15:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:03.223 03:15:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:03.223 03:15:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.223 03:15:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.223 03:15:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:03.223 03:15:26 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:03.223 03:15:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:03.223 03:15:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.742 Waiting for block devices as requested 00:05:04.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.002 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.261 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.261 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.550 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:09.550 03:15:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:09.550 03:15:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:09.550 03:15:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:09.550 03:15:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1541 -- # continue 00:05:09.550 03:15:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:09.550 03:15:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:09.550 03:15:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:09.550 03:15:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1541 -- # continue 00:05:09.550 03:15:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:09.550 03:15:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:09.550 03:15:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:09.550 03:15:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:09.550 03:15:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:09.550 03:15:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:09.550 03:15:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:09.550 03:15:33 -- common/autotest_common.sh@1541 -- # continue 00:05:09.550 03:15:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:09.550 03:15:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:09.550 03:15:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:09.550 03:15:33 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:09.550 03:15:33 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:09.550 03:15:33 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:09.550 03:15:33 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:09.550 03:15:33 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:09.550 03:15:33 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:09.550 03:15:33 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:09.550 03:15:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:09.550 03:15:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:05:09.550 03:15:33 -- common/autotest_common.sh@1541 -- # continue 00:05:09.550 03:15:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:09.550 03:15:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.550 03:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:09.550 03:15:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:09.550 03:15:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.550 03:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:09.809 03:15:33 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.315 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.316 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.316 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.316 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.316 03:15:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:11.316 03:15:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.316 03:15:34 -- common/autotest_common.sh@10 -- # set +x 00:05:11.575 03:15:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:11.575 03:15:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:11.575 03:15:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.575 03:15:34 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:11.575 03:15:34 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:11.575 03:15:34 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:11.575 03:15:34 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:11.575 03:15:34 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:11.575 03:15:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:11.575 03:15:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:11.575 03:15:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.575 03:15:34 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.575 03:15:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:11.575 03:15:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:11.575 03:15:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:11.575 03:15:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:11.575 03:15:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.575 03:15:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:11.575 03:15:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.575 03:15:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:11.575 03:15:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
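The opal_revert_cleanup step being traced here walks every controller found by gen_nvme.sh and reads its PCI device ID out of sysfs, comparing it against 0x0a54 (the ID the revert step is looking for). A minimal sketch of that filter, assuming bdfs[] holds addresses like 0000:00:10.0 as printed earlier:

    # Keep only controllers whose PCI device ID matches the target.
    target=0x0a54
    matches=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010
        [[ $device == "$target" ]] && matches+=("$bdf")
    done
    # In this run every controller reports 0x0010, so matches stays empty
    # and the cleanup returns without reverting any drive.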
00:05:11.575 03:15:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:11.575 03:15:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:11.575 03:15:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.575 03:15:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:11.575 03:15:35 -- common/autotest_common.sh@1570 -- # return 0 00:05:11.575 03:15:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:11.575 03:15:35 -- common/autotest_common.sh@1578 -- # return 0 00:05:11.575 03:15:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:11.575 03:15:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:11.575 03:15:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:11.575 03:15:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:11.575 03:15:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:11.575 03:15:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.575 03:15:35 -- common/autotest_common.sh@10 -- # set +x 00:05:11.575 03:15:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:11.575 03:15:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:11.576 03:15:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.576 03:15:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.576 03:15:35 -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 ************************************ 00:05:11.576 START TEST env 00:05:11.576 ************************************ 00:05:11.576 03:15:35 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:11.835 * Looking for test storage... 00:05:11.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.835 03:15:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.835 03:15:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.835 03:15:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.835 03:15:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.835 03:15:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.835 03:15:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.835 03:15:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.835 03:15:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.835 03:15:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.835 03:15:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.835 03:15:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.835 03:15:35 env -- scripts/common.sh@344 -- # case "$op" in 00:05:11.835 03:15:35 env -- scripts/common.sh@345 -- # : 1 00:05:11.835 03:15:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.835 03:15:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.835 03:15:35 env -- scripts/common.sh@365 -- # decimal 1 00:05:11.835 03:15:35 env -- scripts/common.sh@353 -- # local d=1 00:05:11.835 03:15:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.835 03:15:35 env -- scripts/common.sh@355 -- # echo 1 00:05:11.835 03:15:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.835 03:15:35 env -- scripts/common.sh@366 -- # decimal 2 00:05:11.835 03:15:35 env -- scripts/common.sh@353 -- # local d=2 00:05:11.835 03:15:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.835 03:15:35 env -- scripts/common.sh@355 -- # echo 2 00:05:11.835 03:15:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.835 03:15:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.835 03:15:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.835 03:15:35 env -- scripts/common.sh@368 -- # return 0 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.835 --rc genhtml_branch_coverage=1 00:05:11.835 --rc genhtml_function_coverage=1 00:05:11.835 --rc genhtml_legend=1 00:05:11.835 --rc geninfo_all_blocks=1 00:05:11.835 --rc geninfo_unexecuted_blocks=1 00:05:11.835 00:05:11.835 ' 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.835 --rc genhtml_branch_coverage=1 00:05:11.835 --rc genhtml_function_coverage=1 00:05:11.835 --rc genhtml_legend=1 00:05:11.835 --rc geninfo_all_blocks=1 00:05:11.835 --rc geninfo_unexecuted_blocks=1 00:05:11.835 00:05:11.835 ' 00:05:11.835 03:15:35 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.835 --rc genhtml_branch_coverage=1 00:05:11.835 --rc genhtml_function_coverage=1 00:05:11.835 --rc genhtml_legend=1 00:05:11.835 --rc geninfo_all_blocks=1 00:05:11.835 --rc geninfo_unexecuted_blocks=1 00:05:11.835 00:05:11.835 ' 00:05:11.836 03:15:35 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.836 --rc genhtml_branch_coverage=1 00:05:11.836 --rc genhtml_function_coverage=1 00:05:11.836 --rc genhtml_legend=1 00:05:11.836 --rc geninfo_all_blocks=1 00:05:11.836 --rc geninfo_unexecuted_blocks=1 00:05:11.836 00:05:11.836 ' 00:05:11.836 03:15:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.836 03:15:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.836 03:15:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.836 03:15:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.836 ************************************ 00:05:11.836 START TEST env_memory 00:05:11.836 ************************************ 00:05:11.836 03:15:35 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.836 00:05:11.836 00:05:11.836 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.836 http://cunit.sourceforge.net/ 00:05:11.836 00:05:11.836 00:05:11.836 Suite: memory 00:05:12.095 Test: alloc and free memory map ...[2024-11-05 03:15:35.450173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:12.095 passed 00:05:12.095 Test: mem map translation ...[2024-11-05 03:15:35.495114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:12.095 [2024-11-05 03:15:35.495277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:12.095 [2024-11-05 03:15:35.495495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:12.095 [2024-11-05 03:15:35.495610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:12.095 passed 00:05:12.095 Test: mem map registration ...[2024-11-05 03:15:35.563786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:12.095 [2024-11-05 03:15:35.563840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:12.095 passed 00:05:12.095 Test: mem map adjacent registrations ...passed 00:05:12.095 00:05:12.095 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.095 suites 1 1 n/a 0 0 00:05:12.095 tests 4 4 4 0 0 00:05:12.095 asserts 152 152 152 0 n/a 00:05:12.095 00:05:12.095 Elapsed time = 0.243 seconds 00:05:12.389 00:05:12.389 real 0m0.300s 00:05:12.389 user 0m0.254s 00:05:12.389 sys 0m0.035s 00:05:12.389 03:15:35 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.389 ************************************ 00:05:12.389 END TEST env_memory 00:05:12.389 ************************************ 00:05:12.389 03:15:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:12.389 03:15:35 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.389 03:15:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.389 03:15:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.389 03:15:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.389 ************************************ 00:05:12.389 START TEST env_vtophys 00:05:12.389 ************************************ 00:05:12.389 03:15:35 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.389 EAL: lib.eal log level changed from notice to debug 00:05:12.389 EAL: Detected lcore 0 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 1 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 2 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 3 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 4 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 5 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 6 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 7 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 8 as core 0 on socket 0 00:05:12.389 EAL: Detected lcore 9 as core 0 on socket 0 00:05:12.389 EAL: Maximum logical cores by configuration: 128 00:05:12.389 EAL: Detected CPU lcores: 10 00:05:12.389 EAL: Detected NUMA nodes: 1 00:05:12.389 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:12.389 EAL: Detected shared linkage of DPDK 00:05:12.389 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:12.389 EAL: Selected IOVA mode 'PA' 00:05:12.389 EAL: Probing VFIO support... 00:05:12.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.389 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:12.389 EAL: Ask a virtual area of 0x2e000 bytes 00:05:12.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:12.389 EAL: Setting up physically contiguous memory... 00:05:12.389 EAL: Setting maximum number of open files to 524288 00:05:12.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:12.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:12.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:12.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:12.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:12.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:12.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:12.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:12.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:12.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:12.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:12.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:12.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:12.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:12.390 EAL: Hugepages will be freed exactly as allocated. 00:05:12.390 EAL: No shared files mode enabled, IPC is disabled 00:05:12.390 EAL: No shared files mode enabled, IPC is disabled 00:05:12.390 EAL: TSC frequency is ~2490000 KHz 00:05:12.390 EAL: Main lcore 0 is ready (tid=7faa196a2a40;cpuset=[0]) 00:05:12.390 EAL: Trying to obtain current memory policy. 00:05:12.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.390 EAL: Restoring previous memory policy: 0 00:05:12.390 EAL: request: mp_malloc_sync 00:05:12.390 EAL: No shared files mode enabled, IPC is disabled 00:05:12.390 EAL: Heap on socket 0 was expanded by 2MB 00:05:12.659 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.659 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:12.659 EAL: Mem event callback 'spdk:(nil)' registered 00:05:12.659 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:12.659 00:05:12.659 00:05:12.659 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.659 http://cunit.sourceforge.net/ 00:05:12.659 00:05:12.659 00:05:12.659 Suite: components_suite 00:05:13.227 Test: vtophys_malloc_test ...passed 00:05:13.227 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:13.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.227 EAL: Restoring previous memory policy: 4 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.227 EAL: Trying to obtain current memory policy. 00:05:13.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.227 EAL: Restoring previous memory policy: 4 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.227 EAL: Trying to obtain current memory policy. 00:05:13.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.227 EAL: Restoring previous memory policy: 4 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.227 EAL: Trying to obtain current memory policy. 00:05:13.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.227 EAL: Restoring previous memory policy: 4 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.227 EAL: request: mp_malloc_sync 00:05:13.227 EAL: No shared files mode enabled, IPC is disabled 00:05:13.227 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.227 EAL: Trying to obtain current memory policy. 00:05:13.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.227 EAL: Restoring previous memory policy: 4 00:05:13.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.228 EAL: request: mp_malloc_sync 00:05:13.228 EAL: No shared files mode enabled, IPC is disabled 00:05:13.228 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.488 EAL: request: mp_malloc_sync 00:05:13.488 EAL: No shared files mode enabled, IPC is disabled 00:05:13.488 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.488 EAL: Trying to obtain current memory policy. 
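The alternating "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pairs above and below come from vtophys_spdk_malloc_test allocating progressively larger DMA-safe buffers and freeing them again. A minimal sketch of the pattern being exercised, written against the public spdk_dma_malloc/spdk_vtophys API rather than the test's actual source:

#include <assert.h>
#include "spdk/env.h"

static void
probe_vtophys(size_t size)
{
    /* The allocation is served from the DPDK heap; EAL logs the expansion. */
    void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB alignment */, NULL);

    assert(buf != NULL);
    /* Virtual-to-physical translation must succeed for DMA-safe memory. */
    assert(spdk_vtophys(buf, NULL) != SPDK_VTOPHYS_ERROR);
    /* Freeing the buffer lets EAL shrink the heap again. */
    spdk_dma_free(buf);
}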
00:05:13.488 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.488 EAL: Restoring previous memory policy: 4 00:05:13.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.488 EAL: request: mp_malloc_sync 00:05:13.488 EAL: No shared files mode enabled, IPC is disabled 00:05:13.488 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.488 EAL: request: mp_malloc_sync 00:05:13.488 EAL: No shared files mode enabled, IPC is disabled 00:05:13.488 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.746 EAL: Trying to obtain current memory policy. 00:05:13.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.746 EAL: Restoring previous memory policy: 4 00:05:13.746 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.746 EAL: request: mp_malloc_sync 00:05:13.746 EAL: No shared files mode enabled, IPC is disabled 00:05:13.746 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.005 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.005 EAL: request: mp_malloc_sync 00:05:14.005 EAL: No shared files mode enabled, IPC is disabled 00:05:14.005 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.263 EAL: Trying to obtain current memory policy. 00:05:14.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.521 EAL: Restoring previous memory policy: 4 00:05:14.521 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.521 EAL: request: mp_malloc_sync 00:05:14.521 EAL: No shared files mode enabled, IPC is disabled 00:05:14.521 EAL: Heap on socket 0 was expanded by 258MB 00:05:14.779 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.038 EAL: request: mp_malloc_sync 00:05:15.038 EAL: No shared files mode enabled, IPC is disabled 00:05:15.038 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.297 EAL: Trying to obtain current memory policy. 00:05:15.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.555 EAL: Restoring previous memory policy: 4 00:05:15.555 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.555 EAL: request: mp_malloc_sync 00:05:15.555 EAL: No shared files mode enabled, IPC is disabled 00:05:15.555 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.518 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.777 EAL: request: mp_malloc_sync 00:05:16.777 EAL: No shared files mode enabled, IPC is disabled 00:05:16.777 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.715 EAL: Trying to obtain current memory policy. 
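Each "Calling mem event callback 'spdk:(nil)'" record marks SPDK reacting to a DPDK heap change: newly allocated regions are registered, freed ones unregistered, and every active spdk_mem_map receives a notify callback so its translations stay current. Translations are tracked at 2 MiB granularity, which is why the env_memory run earlier in this section rejected vaddr=1234 and len=1234 as invalid parameters. A sketch of a consumer-side mem map using only the public API; the identity translation here is illustrative, not what SPDK's internal vtophys map stores:

#include "spdk/env.h"

static int
demo_notify(void *cb_ctx, struct spdk_mem_map *map,
            enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    if (action == SPDK_MEM_MAP_NOTIFY_REGISTER) {
        /* vaddr/size arrive 2 MiB aligned, so this cannot trip the
         * "invalid spdk_mem_map_set_translation parameters" check. */
        return spdk_mem_map_set_translation(map, (uint64_t)vaddr, size,
                                            (uint64_t)vaddr);
    }
    return spdk_mem_map_clear_translation(map, (uint64_t)vaddr, size);
}

static const struct spdk_mem_map_ops demo_ops = {
    .notify_cb = demo_notify,
    .are_contiguous = NULL,
};

/* usage: struct spdk_mem_map *map = spdk_mem_map_alloc(0, &demo_ops, NULL); */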
00:05:17.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.974 EAL: Restoring previous memory policy: 4 00:05:17.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.974 EAL: request: mp_malloc_sync 00:05:17.974 EAL: No shared files mode enabled, IPC is disabled 00:05:17.974 EAL: Heap on socket 0 was expanded by 1026MB 00:05:19.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.139 EAL: request: mp_malloc_sync 00:05:20.139 EAL: No shared files mode enabled, IPC is disabled 00:05:20.139 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.044 passed 00:05:22.044 00:05:22.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.044 suites 1 1 n/a 0 0 00:05:22.044 tests 2 2 2 0 0 00:05:22.044 asserts 5796 5796 5796 0 n/a 00:05:22.044 00:05:22.044 Elapsed time = 9.249 seconds 00:05:22.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.044 EAL: request: mp_malloc_sync 00:05:22.044 EAL: No shared files mode enabled, IPC is disabled 00:05:22.044 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.044 EAL: No shared files mode enabled, IPC is disabled 00:05:22.044 EAL: No shared files mode enabled, IPC is disabled 00:05:22.044 EAL: No shared files mode enabled, IPC is disabled 00:05:22.044 ************************************ 00:05:22.044 END TEST env_vtophys 00:05:22.044 ************************************ 00:05:22.044 00:05:22.044 real 0m9.628s 00:05:22.044 user 0m8.058s 00:05:22.044 sys 0m1.384s 00:05:22.044 03:15:45 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.044 03:15:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.044 03:15:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.044 03:15:45 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.044 03:15:45 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.044 03:15:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.044 ************************************ 00:05:22.044 START TEST env_pci 00:05:22.044 ************************************ 00:05:22.044 03:15:45 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.044 00:05:22.044 00:05:22.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.044 http://cunit.sourceforge.net/ 00:05:22.044 00:05:22.044 00:05:22.044 Suite: pci 00:05:22.044 Test: pci_hook ...[2024-11-05 03:15:45.503560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57672 has claimed it 00:05:22.044 passed 00:05:22.044 00:05:22.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.044 suites 1 1 n/a 0 0 00:05:22.044 tests 1 1 1 0 0 00:05:22.044 asserts 25 25 25 0 n/a 00:05:22.044 00:05:22.044 Elapsed time = 0.010 seconds 00:05:22.044 EAL: Cannot find device (10000:00:01.0) 00:05:22.044 EAL: Failed to attach device on primary process 00:05:22.044 00:05:22.044 real 0m0.111s 00:05:22.044 user 0m0.037s 00:05:22.044 sys 0m0.072s 00:05:22.044 ************************************ 00:05:22.044 END TEST env_pci 00:05:22.044 ************************************ 00:05:22.044 03:15:45 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.044 03:15:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.044 03:15:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.303 03:15:45 env -- env/env.sh@15 -- # uname 00:05:22.303 03:15:45 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.303 03:15:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.303 03:15:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.303 03:15:45 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:22.303 03:15:45 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.303 03:15:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.303 ************************************ 00:05:22.303 START TEST env_dpdk_post_init 00:05:22.303 ************************************ 00:05:22.303 03:15:45 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.303 EAL: Detected CPU lcores: 10 00:05:22.303 EAL: Detected NUMA nodes: 1 00:05:22.303 EAL: Detected shared linkage of DPDK 00:05:22.303 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.303 EAL: Selected IOVA mode 'PA' 00:05:22.303 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.563 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:22.563 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:22.563 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:22.563 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:22.563 Starting DPDK initialization... 00:05:22.563 Starting SPDK post initialization... 00:05:22.563 SPDK NVMe probe 00:05:22.563 Attaching to 0000:00:10.0 00:05:22.563 Attaching to 0000:00:11.0 00:05:22.563 Attaching to 0000:00:12.0 00:05:22.563 Attaching to 0000:00:13.0 00:05:22.563 Attached to 0000:00:10.0 00:05:22.563 Attached to 0000:00:11.0 00:05:22.563 Attached to 0000:00:13.0 00:05:22.563 Attached to 0000:00:12.0 00:05:22.563 Cleaning up... 
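The probe/attach sequence above is driven entirely by the environment bootstrap under test. A rough equivalent of what env_dpdk_post_init sets up before probing, assuming the same -c 0x1 --base-virtaddr=0x200000000000 arguments shown in the command line; field names are from spdk/env.h, and exact signatures can vary between SPDK releases:

#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "env_dpdk_post_init_demo";  /* hypothetical app name */
    opts.core_mask = "0x1";                 /* matches -c 0x1 */
    opts.base_virtaddr = 0x200000000000ULL; /* matches --base-virtaddr */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NVMe controllers would be probed and attached after this point. */
    spdk_env_fini();
    return 0;
}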
00:05:22.563 ************************************ 00:05:22.563 END TEST env_dpdk_post_init 00:05:22.563 ************************************ 00:05:22.563 00:05:22.563 real 0m0.316s 00:05:22.563 user 0m0.111s 00:05:22.563 sys 0m0.108s 00:05:22.563 03:15:45 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.563 03:15:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.563 03:15:46 env -- env/env.sh@26 -- # uname 00:05:22.563 03:15:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.563 03:15:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.563 03:15:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.563 03:15:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.563 03:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.563 ************************************ 00:05:22.563 START TEST env_mem_callbacks 00:05:22.563 ************************************ 00:05:22.563 03:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.563 EAL: Detected CPU lcores: 10 00:05:22.563 EAL: Detected NUMA nodes: 1 00:05:22.563 EAL: Detected shared linkage of DPDK 00:05:22.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.563 EAL: Selected IOVA mode 'PA' 00:05:22.822 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.822 00:05:22.822 00:05:22.822 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.822 http://cunit.sourceforge.net/ 00:05:22.822 00:05:22.822 00:05:22.822 Suite: memory 00:05:22.822 Test: test ... 00:05:22.822 register 0x200000200000 2097152 00:05:22.822 malloc 3145728 00:05:22.822 register 0x200000400000 4194304 00:05:22.822 buf 0x2000004fffc0 len 3145728 PASSED 00:05:22.822 malloc 64 00:05:22.822 buf 0x2000004ffec0 len 64 PASSED 00:05:22.822 malloc 4194304 00:05:22.822 register 0x200000800000 6291456 00:05:22.822 buf 0x2000009fffc0 len 4194304 PASSED 00:05:22.822 free 0x2000004fffc0 3145728 00:05:22.822 free 0x2000004ffec0 64 00:05:22.822 unregister 0x200000400000 4194304 PASSED 00:05:22.822 free 0x2000009fffc0 4194304 00:05:22.822 unregister 0x200000800000 6291456 PASSED 00:05:22.822 malloc 8388608 00:05:22.822 register 0x200000400000 10485760 00:05:22.822 buf 0x2000005fffc0 len 8388608 PASSED 00:05:22.822 free 0x2000005fffc0 8388608 00:05:22.822 unregister 0x200000400000 10485760 PASSED 00:05:22.822 passed 00:05:22.822 00:05:22.823 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.823 suites 1 1 n/a 0 0 00:05:22.823 tests 1 1 1 0 0 00:05:22.823 asserts 15 15 15 0 n/a 00:05:22.823 00:05:22.823 Elapsed time = 0.087 seconds 00:05:22.823 00:05:22.823 real 0m0.309s 00:05:22.823 user 0m0.114s 00:05:22.823 sys 0m0.090s 00:05:22.823 ************************************ 00:05:22.823 END TEST env_mem_callbacks 00:05:22.823 ************************************ 00:05:22.823 03:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.823 03:15:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:23.082 ************************************ 00:05:23.082 END TEST env 00:05:23.082 ************************************ 00:05:23.082 00:05:23.082 real 0m11.317s 00:05:23.082 user 0m8.831s 00:05:23.082 sys 0m2.081s 00:05:23.082 03:15:46 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.082 03:15:46 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.082 03:15:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.082 03:15:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.082 03:15:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.082 03:15:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.082 ************************************ 00:05:23.082 START TEST rpc 00:05:23.082 ************************************ 00:05:23.082 03:15:46 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.082 * Looking for test storage... 00:05:23.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.082 03:15:46 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.082 03:15:46 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.082 03:15:46 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.341 03:15:46 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.341 03:15:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.341 03:15:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.341 03:15:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.341 03:15:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.341 03:15:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.341 03:15:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.341 03:15:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.341 03:15:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.341 03:15:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.341 03:15:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.341 03:15:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.341 03:15:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.341 03:15:46 rpc -- scripts/common.sh@345 -- # : 1 00:05:23.342 03:15:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.342 03:15:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.342 03:15:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.342 03:15:46 rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.342 03:15:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.342 03:15:46 rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.342 03:15:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.342 03:15:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.342 03:15:46 rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.342 03:15:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.342 03:15:46 rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.342 03:15:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.342 03:15:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.342 03:15:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.342 03:15:46 rpc -- scripts/common.sh@368 -- # return 0 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.342 --rc genhtml_branch_coverage=1 00:05:23.342 --rc genhtml_function_coverage=1 00:05:23.342 --rc genhtml_legend=1 00:05:23.342 --rc geninfo_all_blocks=1 00:05:23.342 --rc geninfo_unexecuted_blocks=1 00:05:23.342 00:05:23.342 ' 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.342 --rc genhtml_branch_coverage=1 00:05:23.342 --rc genhtml_function_coverage=1 00:05:23.342 --rc genhtml_legend=1 00:05:23.342 --rc geninfo_all_blocks=1 00:05:23.342 --rc geninfo_unexecuted_blocks=1 00:05:23.342 00:05:23.342 ' 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.342 --rc genhtml_branch_coverage=1 00:05:23.342 --rc genhtml_function_coverage=1 00:05:23.342 --rc genhtml_legend=1 00:05:23.342 --rc geninfo_all_blocks=1 00:05:23.342 --rc geninfo_unexecuted_blocks=1 00:05:23.342 00:05:23.342 ' 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.342 --rc genhtml_branch_coverage=1 00:05:23.342 --rc genhtml_function_coverage=1 00:05:23.342 --rc genhtml_legend=1 00:05:23.342 --rc geninfo_all_blocks=1 00:05:23.342 --rc geninfo_unexecuted_blocks=1 00:05:23.342 00:05:23.342 ' 00:05:23.342 03:15:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57806 00:05:23.342 03:15:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:23.342 03:15:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.342 03:15:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57806 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@833 -- # '[' -z 57806 ']' 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
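waitforlisten blocks until spdk_tgt (pid 57806 here) accepts connections on /var/tmp/spdk.sock; every subsequent rpc_cmd is a JSON-RPC 2.0 request over that UNIX socket. A minimal stand-alone client sketch, assuming the default socket path and the rpc_get_methods method; framing and error handling are trimmed to the basics:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    if (fd < 0) {
        return 1;
    }
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");  /* this is what polling clients see pre-listen */
        return 1;
    }
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
    write(fd, req, strlen(req));
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);  /* JSON array of registered RPC method names */
    }
    close(fd);
    return 0;
}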
00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.342 03:15:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.342 [2024-11-05 03:15:46.882193] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:05:23.342 [2024-11-05 03:15:46.882389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:05:23.601 [2024-11-05 03:15:47.076919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.860 [2024-11-05 03:15:47.235163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.860 [2024-11-05 03:15:47.235261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57806' to capture a snapshot of events at runtime. 00:05:23.860 [2024-11-05 03:15:47.235276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.861 [2024-11-05 03:15:47.235310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.861 [2024-11-05 03:15:47.235322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57806 for offline analysis/debug. 00:05:23.861 [2024-11-05 03:15:47.236888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.798 03:15:48 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.798 03:15:48 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:24.798 03:15:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.798 03:15:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.799 03:15:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:24.799 03:15:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:24.799 03:15:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.799 03:15:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.799 03:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 ************************************ 00:05:24.799 START TEST rpc_integrity 00:05:24.799 ************************************ 00:05:24.799 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:24.799 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.799 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.799 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.058 03:15:48 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.058 { 00:05:25.058 "name": "Malloc0", 00:05:25.058 "aliases": [ 00:05:25.058 "209c267d-5c2d-4ac9-b89c-ed5470bd41b1" 00:05:25.058 ], 00:05:25.058 "product_name": "Malloc disk", 00:05:25.058 "block_size": 512, 00:05:25.058 "num_blocks": 16384, 00:05:25.058 "uuid": "209c267d-5c2d-4ac9-b89c-ed5470bd41b1", 00:05:25.058 "assigned_rate_limits": { 00:05:25.058 "rw_ios_per_sec": 0, 00:05:25.058 "rw_mbytes_per_sec": 0, 00:05:25.058 "r_mbytes_per_sec": 0, 00:05:25.058 "w_mbytes_per_sec": 0 00:05:25.058 }, 00:05:25.058 "claimed": false, 00:05:25.058 "zoned": false, 00:05:25.058 "supported_io_types": { 00:05:25.058 "read": true, 00:05:25.058 "write": true, 00:05:25.058 "unmap": true, 00:05:25.058 "flush": true, 00:05:25.058 "reset": true, 00:05:25.058 "nvme_admin": false, 00:05:25.058 "nvme_io": false, 00:05:25.058 "nvme_io_md": false, 00:05:25.058 "write_zeroes": true, 00:05:25.058 "zcopy": true, 00:05:25.058 "get_zone_info": false, 00:05:25.058 "zone_management": false, 00:05:25.058 "zone_append": false, 00:05:25.058 "compare": false, 00:05:25.058 "compare_and_write": false, 00:05:25.058 "abort": true, 00:05:25.058 "seek_hole": false, 00:05:25.058 "seek_data": false, 00:05:25.058 "copy": true, 00:05:25.058 "nvme_iov_md": false 00:05:25.058 }, 00:05:25.058 "memory_domains": [ 00:05:25.058 { 00:05:25.058 "dma_device_id": "system", 00:05:25.058 "dma_device_type": 1 00:05:25.058 }, 00:05:25.058 { 00:05:25.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.058 "dma_device_type": 2 00:05:25.058 } 00:05:25.058 ], 00:05:25.058 "driver_specific": {} 00:05:25.058 } 00:05:25.058 ]' 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 [2024-11-05 03:15:48.520490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.058 [2024-11-05 03:15:48.520586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.058 [2024-11-05 03:15:48.520629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:25.058 [2024-11-05 03:15:48.520647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.058 [2024-11-05 03:15:48.523687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.058 [2024-11-05 03:15:48.523738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.058 Passthru0 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.058 
03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.058 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.058 { 00:05:25.058 "name": "Malloc0", 00:05:25.058 "aliases": [ 00:05:25.058 "209c267d-5c2d-4ac9-b89c-ed5470bd41b1" 00:05:25.058 ], 00:05:25.058 "product_name": "Malloc disk", 00:05:25.058 "block_size": 512, 00:05:25.058 "num_blocks": 16384, 00:05:25.058 "uuid": "209c267d-5c2d-4ac9-b89c-ed5470bd41b1", 00:05:25.058 "assigned_rate_limits": { 00:05:25.058 "rw_ios_per_sec": 0, 00:05:25.058 "rw_mbytes_per_sec": 0, 00:05:25.058 "r_mbytes_per_sec": 0, 00:05:25.058 "w_mbytes_per_sec": 0 00:05:25.058 }, 00:05:25.058 "claimed": true, 00:05:25.058 "claim_type": "exclusive_write", 00:05:25.058 "zoned": false, 00:05:25.058 "supported_io_types": { 00:05:25.058 "read": true, 00:05:25.058 "write": true, 00:05:25.058 "unmap": true, 00:05:25.058 "flush": true, 00:05:25.058 "reset": true, 00:05:25.058 "nvme_admin": false, 00:05:25.058 "nvme_io": false, 00:05:25.058 "nvme_io_md": false, 00:05:25.058 "write_zeroes": true, 00:05:25.058 "zcopy": true, 00:05:25.058 "get_zone_info": false, 00:05:25.058 "zone_management": false, 00:05:25.058 "zone_append": false, 00:05:25.058 "compare": false, 00:05:25.058 "compare_and_write": false, 00:05:25.058 "abort": true, 00:05:25.058 "seek_hole": false, 00:05:25.058 "seek_data": false, 00:05:25.058 "copy": true, 00:05:25.058 "nvme_iov_md": false 00:05:25.058 }, 00:05:25.058 "memory_domains": [ 00:05:25.058 { 00:05:25.058 "dma_device_id": "system", 00:05:25.058 "dma_device_type": 1 00:05:25.058 }, 00:05:25.058 { 00:05:25.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.058 "dma_device_type": 2 00:05:25.058 } 00:05:25.058 ], 00:05:25.058 "driver_specific": {} 00:05:25.058 }, 00:05:25.058 { 00:05:25.058 "name": "Passthru0", 00:05:25.058 "aliases": [ 00:05:25.058 "7aa59c35-6cac-5e01-aea3-092394104d83" 00:05:25.058 ], 00:05:25.058 "product_name": "passthru", 00:05:25.058 "block_size": 512, 00:05:25.058 "num_blocks": 16384, 00:05:25.058 "uuid": "7aa59c35-6cac-5e01-aea3-092394104d83", 00:05:25.058 "assigned_rate_limits": { 00:05:25.058 "rw_ios_per_sec": 0, 00:05:25.058 "rw_mbytes_per_sec": 0, 00:05:25.058 "r_mbytes_per_sec": 0, 00:05:25.058 "w_mbytes_per_sec": 0 00:05:25.059 }, 00:05:25.059 "claimed": false, 00:05:25.059 "zoned": false, 00:05:25.059 "supported_io_types": { 00:05:25.059 "read": true, 00:05:25.059 "write": true, 00:05:25.059 "unmap": true, 00:05:25.059 "flush": true, 00:05:25.059 "reset": true, 00:05:25.059 "nvme_admin": false, 00:05:25.059 "nvme_io": false, 00:05:25.059 "nvme_io_md": false, 00:05:25.059 "write_zeroes": true, 00:05:25.059 "zcopy": true, 00:05:25.059 "get_zone_info": false, 00:05:25.059 "zone_management": false, 00:05:25.059 "zone_append": false, 00:05:25.059 "compare": false, 00:05:25.059 "compare_and_write": false, 00:05:25.059 "abort": true, 00:05:25.059 "seek_hole": false, 00:05:25.059 "seek_data": false, 00:05:25.059 "copy": true, 00:05:25.059 "nvme_iov_md": false 00:05:25.059 }, 00:05:25.059 "memory_domains": [ 00:05:25.059 { 00:05:25.059 "dma_device_id": "system", 00:05:25.059 "dma_device_type": 1 00:05:25.059 }, 00:05:25.059 { 00:05:25.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.059 "dma_device_type": 2 
00:05:25.059 } 00:05:25.059 ], 00:05:25.059 "driver_specific": { 00:05:25.059 "passthru": { 00:05:25.059 "name": "Passthru0", 00:05:25.059 "base_bdev_name": "Malloc0" 00:05:25.059 } 00:05:25.059 } 00:05:25.059 } 00:05:25.059 ]' 00:05:25.059 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.059 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.059 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.059 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.059 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.059 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.059 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.059 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.059 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.318 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.318 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.318 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.318 ************************************ 00:05:25.318 END TEST rpc_integrity 00:05:25.318 ************************************ 00:05:25.318 03:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.318 00:05:25.318 real 0m0.362s 00:05:25.318 user 0m0.192s 00:05:25.318 sys 0m0.064s 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.318 03:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 03:15:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.318 03:15:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.318 03:15:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.318 03:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 ************************************ 00:05:25.318 START TEST rpc_plugins 00:05:25.318 ************************************ 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.318 { 00:05:25.318 "name": "Malloc1", 00:05:25.318 "aliases": 
[ 00:05:25.318 "194abc4c-2931-4314-b9be-b9b12131d65b" 00:05:25.318 ], 00:05:25.318 "product_name": "Malloc disk", 00:05:25.318 "block_size": 4096, 00:05:25.318 "num_blocks": 256, 00:05:25.318 "uuid": "194abc4c-2931-4314-b9be-b9b12131d65b", 00:05:25.318 "assigned_rate_limits": { 00:05:25.318 "rw_ios_per_sec": 0, 00:05:25.318 "rw_mbytes_per_sec": 0, 00:05:25.318 "r_mbytes_per_sec": 0, 00:05:25.318 "w_mbytes_per_sec": 0 00:05:25.318 }, 00:05:25.318 "claimed": false, 00:05:25.318 "zoned": false, 00:05:25.318 "supported_io_types": { 00:05:25.318 "read": true, 00:05:25.318 "write": true, 00:05:25.318 "unmap": true, 00:05:25.318 "flush": true, 00:05:25.318 "reset": true, 00:05:25.318 "nvme_admin": false, 00:05:25.318 "nvme_io": false, 00:05:25.318 "nvme_io_md": false, 00:05:25.318 "write_zeroes": true, 00:05:25.318 "zcopy": true, 00:05:25.318 "get_zone_info": false, 00:05:25.318 "zone_management": false, 00:05:25.318 "zone_append": false, 00:05:25.318 "compare": false, 00:05:25.318 "compare_and_write": false, 00:05:25.318 "abort": true, 00:05:25.318 "seek_hole": false, 00:05:25.318 "seek_data": false, 00:05:25.318 "copy": true, 00:05:25.318 "nvme_iov_md": false 00:05:25.318 }, 00:05:25.318 "memory_domains": [ 00:05:25.318 { 00:05:25.318 "dma_device_id": "system", 00:05:25.318 "dma_device_type": 1 00:05:25.318 }, 00:05:25.318 { 00:05:25.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.318 "dma_device_type": 2 00:05:25.318 } 00:05:25.318 ], 00:05:25.318 "driver_specific": {} 00:05:25.318 } 00:05:25.318 ]' 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.318 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.318 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.580 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.580 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.580 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:25.580 ************************************ 00:05:25.580 END TEST rpc_plugins 00:05:25.580 ************************************ 00:05:25.580 03:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.580 00:05:25.580 real 0m0.176s 00:05:25.580 user 0m0.096s 00:05:25.580 sys 0m0.031s 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.580 03:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.580 03:15:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.580 03:15:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.580 03:15:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.580 03:15:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.580 ************************************ 00:05:25.580 START TEST rpc_trace_cmd_test 00:05:25.581 ************************************ 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:25.581 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57806", 00:05:25.581 "tpoint_group_mask": "0x8", 00:05:25.581 "iscsi_conn": { 00:05:25.581 "mask": "0x2", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "scsi": { 00:05:25.581 "mask": "0x4", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "bdev": { 00:05:25.581 "mask": "0x8", 00:05:25.581 "tpoint_mask": "0xffffffffffffffff" 00:05:25.581 }, 00:05:25.581 "nvmf_rdma": { 00:05:25.581 "mask": "0x10", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "nvmf_tcp": { 00:05:25.581 "mask": "0x20", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "ftl": { 00:05:25.581 "mask": "0x40", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "blobfs": { 00:05:25.581 "mask": "0x80", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "dsa": { 00:05:25.581 "mask": "0x200", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "thread": { 00:05:25.581 "mask": "0x400", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "nvme_pcie": { 00:05:25.581 "mask": "0x800", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "iaa": { 00:05:25.581 "mask": "0x1000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "nvme_tcp": { 00:05:25.581 "mask": "0x2000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "bdev_nvme": { 00:05:25.581 "mask": "0x4000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "sock": { 00:05:25.581 "mask": "0x8000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "blob": { 00:05:25.581 "mask": "0x10000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "bdev_raid": { 00:05:25.581 "mask": "0x20000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 }, 00:05:25.581 "scheduler": { 00:05:25.581 "mask": "0x40000", 00:05:25.581 "tpoint_mask": "0x0" 00:05:25.581 } 00:05:25.581 }' 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.581 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:25.841 ************************************ 00:05:25.841 END TEST rpc_trace_cmd_test 00:05:25.841 ************************************ 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:25.841 00:05:25.841 real 0m0.268s 
00:05:25.841 user 0m0.209s 00:05:25.841 sys 0m0.048s 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.841 03:15:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 03:15:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.841 03:15:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.841 03:15:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.841 03:15:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.841 03:15:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.841 03:15:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 ************************************ 00:05:25.841 START TEST rpc_daemon_integrity 00:05:25.841 ************************************ 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.841 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.100 { 00:05:26.100 "name": "Malloc2", 00:05:26.100 "aliases": [ 00:05:26.100 "433aa928-59ce-4e76-b397-952ba68e60a5" 00:05:26.100 ], 00:05:26.100 "product_name": "Malloc disk", 00:05:26.100 "block_size": 512, 00:05:26.100 "num_blocks": 16384, 00:05:26.100 "uuid": "433aa928-59ce-4e76-b397-952ba68e60a5", 00:05:26.100 "assigned_rate_limits": { 00:05:26.100 "rw_ios_per_sec": 0, 00:05:26.100 "rw_mbytes_per_sec": 0, 00:05:26.100 "r_mbytes_per_sec": 0, 00:05:26.100 "w_mbytes_per_sec": 0 00:05:26.100 }, 00:05:26.100 "claimed": false, 00:05:26.100 "zoned": false, 00:05:26.100 "supported_io_types": { 00:05:26.100 "read": true, 00:05:26.100 "write": true, 00:05:26.100 "unmap": true, 00:05:26.100 "flush": true, 00:05:26.100 "reset": true, 00:05:26.100 "nvme_admin": false, 00:05:26.100 "nvme_io": false, 00:05:26.100 "nvme_io_md": false, 00:05:26.100 "write_zeroes": true, 00:05:26.100 "zcopy": true, 00:05:26.100 "get_zone_info": false, 00:05:26.100 "zone_management": false, 00:05:26.100 "zone_append": false, 00:05:26.100 "compare": false, 00:05:26.100 
"compare_and_write": false, 00:05:26.100 "abort": true, 00:05:26.100 "seek_hole": false, 00:05:26.100 "seek_data": false, 00:05:26.100 "copy": true, 00:05:26.100 "nvme_iov_md": false 00:05:26.100 }, 00:05:26.100 "memory_domains": [ 00:05:26.100 { 00:05:26.100 "dma_device_id": "system", 00:05:26.100 "dma_device_type": 1 00:05:26.100 }, 00:05:26.100 { 00:05:26.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.100 "dma_device_type": 2 00:05:26.100 } 00:05:26.100 ], 00:05:26.100 "driver_specific": {} 00:05:26.100 } 00:05:26.100 ]' 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.100 [2024-11-05 03:15:49.546358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.100 [2024-11-05 03:15:49.546686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.100 [2024-11-05 03:15:49.546727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:26.100 [2024-11-05 03:15:49.546745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.100 [2024-11-05 03:15:49.549876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.100 [2024-11-05 03:15:49.550044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.100 Passthru0 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.100 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.100 { 00:05:26.100 "name": "Malloc2", 00:05:26.100 "aliases": [ 00:05:26.100 "433aa928-59ce-4e76-b397-952ba68e60a5" 00:05:26.100 ], 00:05:26.100 "product_name": "Malloc disk", 00:05:26.100 "block_size": 512, 00:05:26.100 "num_blocks": 16384, 00:05:26.100 "uuid": "433aa928-59ce-4e76-b397-952ba68e60a5", 00:05:26.100 "assigned_rate_limits": { 00:05:26.100 "rw_ios_per_sec": 0, 00:05:26.100 "rw_mbytes_per_sec": 0, 00:05:26.100 "r_mbytes_per_sec": 0, 00:05:26.100 "w_mbytes_per_sec": 0 00:05:26.100 }, 00:05:26.100 "claimed": true, 00:05:26.100 "claim_type": "exclusive_write", 00:05:26.100 "zoned": false, 00:05:26.100 "supported_io_types": { 00:05:26.100 "read": true, 00:05:26.100 "write": true, 00:05:26.100 "unmap": true, 00:05:26.100 "flush": true, 00:05:26.100 "reset": true, 00:05:26.100 "nvme_admin": false, 00:05:26.100 "nvme_io": false, 00:05:26.100 "nvme_io_md": false, 00:05:26.100 "write_zeroes": true, 00:05:26.100 "zcopy": true, 00:05:26.100 "get_zone_info": false, 00:05:26.100 "zone_management": false, 00:05:26.100 "zone_append": false, 00:05:26.100 "compare": false, 00:05:26.100 "compare_and_write": false, 00:05:26.100 "abort": true, 00:05:26.100 "seek_hole": false, 00:05:26.100 "seek_data": false, 
00:05:26.100 "copy": true, 00:05:26.100 "nvme_iov_md": false 00:05:26.100 }, 00:05:26.100 "memory_domains": [ 00:05:26.100 { 00:05:26.100 "dma_device_id": "system", 00:05:26.100 "dma_device_type": 1 00:05:26.100 }, 00:05:26.100 { 00:05:26.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.100 "dma_device_type": 2 00:05:26.100 } 00:05:26.101 ], 00:05:26.101 "driver_specific": {} 00:05:26.101 }, 00:05:26.101 { 00:05:26.101 "name": "Passthru0", 00:05:26.101 "aliases": [ 00:05:26.101 "8b761edb-c62e-5122-9384-27aa4765eece" 00:05:26.101 ], 00:05:26.101 "product_name": "passthru", 00:05:26.101 "block_size": 512, 00:05:26.101 "num_blocks": 16384, 00:05:26.101 "uuid": "8b761edb-c62e-5122-9384-27aa4765eece", 00:05:26.101 "assigned_rate_limits": { 00:05:26.101 "rw_ios_per_sec": 0, 00:05:26.101 "rw_mbytes_per_sec": 0, 00:05:26.101 "r_mbytes_per_sec": 0, 00:05:26.101 "w_mbytes_per_sec": 0 00:05:26.101 }, 00:05:26.101 "claimed": false, 00:05:26.101 "zoned": false, 00:05:26.101 "supported_io_types": { 00:05:26.101 "read": true, 00:05:26.101 "write": true, 00:05:26.101 "unmap": true, 00:05:26.101 "flush": true, 00:05:26.101 "reset": true, 00:05:26.101 "nvme_admin": false, 00:05:26.101 "nvme_io": false, 00:05:26.101 "nvme_io_md": false, 00:05:26.101 "write_zeroes": true, 00:05:26.101 "zcopy": true, 00:05:26.101 "get_zone_info": false, 00:05:26.101 "zone_management": false, 00:05:26.101 "zone_append": false, 00:05:26.101 "compare": false, 00:05:26.101 "compare_and_write": false, 00:05:26.101 "abort": true, 00:05:26.101 "seek_hole": false, 00:05:26.101 "seek_data": false, 00:05:26.101 "copy": true, 00:05:26.101 "nvme_iov_md": false 00:05:26.101 }, 00:05:26.101 "memory_domains": [ 00:05:26.101 { 00:05:26.101 "dma_device_id": "system", 00:05:26.101 "dma_device_type": 1 00:05:26.101 }, 00:05:26.101 { 00:05:26.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.101 "dma_device_type": 2 00:05:26.101 } 00:05:26.101 ], 00:05:26.101 "driver_specific": { 00:05:26.101 "passthru": { 00:05:26.101 "name": "Passthru0", 00:05:26.101 "base_bdev_name": "Malloc2" 00:05:26.101 } 00:05:26.101 } 00:05:26.101 } 00:05:26.101 ]' 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.360 ************************************ 00:05:26.360 END TEST rpc_daemon_integrity 00:05:26.360 ************************************ 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.360 00:05:26.360 real 0m0.360s 00:05:26.360 user 0m0.191s 00:05:26.360 sys 0m0.062s 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.360 03:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.360 03:15:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.360 03:15:49 rpc -- rpc/rpc.sh@84 -- # killprocess 57806 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@952 -- # '[' -z 57806 ']' 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@956 -- # kill -0 57806 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@957 -- # uname 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57806 00:05:26.360 killing process with pid 57806 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57806' 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@971 -- # kill 57806 00:05:26.360 03:15:49 rpc -- common/autotest_common.sh@976 -- # wait 57806 00:05:29.649 00:05:29.649 real 0m5.999s 00:05:29.649 user 0m6.327s 00:05:29.649 sys 0m1.240s 00:05:29.649 03:15:52 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.649 ************************************ 00:05:29.649 END TEST rpc 00:05:29.649 ************************************ 00:05:29.649 03:15:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.649 03:15:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.649 03:15:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.649 03:15:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.650 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.650 ************************************ 00:05:29.650 START TEST skip_rpc 00:05:29.650 ************************************ 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.650 * Looking for test storage... 
00:05:29.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.650 03:15:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.650 --rc genhtml_branch_coverage=1 00:05:29.650 --rc genhtml_function_coverage=1 00:05:29.650 --rc genhtml_legend=1 00:05:29.650 --rc geninfo_all_blocks=1 00:05:29.650 --rc geninfo_unexecuted_blocks=1 00:05:29.650 00:05:29.650 ' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.650 --rc genhtml_branch_coverage=1 00:05:29.650 --rc genhtml_function_coverage=1 00:05:29.650 --rc genhtml_legend=1 00:05:29.650 --rc geninfo_all_blocks=1 00:05:29.650 --rc geninfo_unexecuted_blocks=1 00:05:29.650 00:05:29.650 ' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.650 --rc genhtml_branch_coverage=1 00:05:29.650 --rc genhtml_function_coverage=1 00:05:29.650 --rc genhtml_legend=1 00:05:29.650 --rc geninfo_all_blocks=1 00:05:29.650 --rc geninfo_unexecuted_blocks=1 00:05:29.650 00:05:29.650 ' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.650 --rc genhtml_branch_coverage=1 00:05:29.650 --rc genhtml_function_coverage=1 00:05:29.650 --rc genhtml_legend=1 00:05:29.650 --rc geninfo_all_blocks=1 00:05:29.650 --rc geninfo_unexecuted_blocks=1 00:05:29.650 00:05:29.650 ' 00:05:29.650 03:15:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.650 03:15:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.650 03:15:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.650 03:15:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.650 ************************************ 00:05:29.650 START TEST skip_rpc 00:05:29.650 ************************************ 00:05:29.650 03:15:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:29.650 03:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58046 00:05:29.650 03:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:29.650 03:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.650 03:15:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:29.650 [2024-11-05 03:15:52.951205] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
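With the target launched under --no-rpc-server, the assertion that follows in the trace is the whole point of this test: no socket is ever opened at /var/tmp/spdk.sock, so any RPC attempt has to fail, and the NOT helper inverts that failure into a pass. Reduced to its shape (a sketch based on the commands visible in the trace; rpc_cmd is assumed to be the harness's thin wrapper around scripts/rpc.py):

# Simplified form of test_skip_rpc() as exercised below.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt --no-rpc-server -m 0x1 &    # RPC server disabled on purpose
spdk_pid=$!
trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
sleep 5                               # no socket to poll, so just wait out startup

if rpc_cmd spdk_get_version; then     # must fail: nothing listens on spdk.sock
    echo 'RPC unexpectedly succeeded' >&2
    exit 1
fi

trap - SIGINT SIGTERM EXIT
kill "$spdk_pid" && wait "$spdk_pid"  # what killprocess 58046 does below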
00:05:29.650 [2024-11-05 03:15:52.951685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58046 ] 00:05:29.650 [2024-11-05 03:15:53.144532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.910 [2024-11-05 03:15:53.299578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58046 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58046 ']' 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58046 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58046 00:05:35.183 killing process with pid 58046 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58046' 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58046 00:05:35.183 03:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58046 00:05:37.089 ************************************ 00:05:37.089 END TEST skip_rpc 00:05:37.089 ************************************ 00:05:37.089 00:05:37.089 real 0m7.736s 00:05:37.089 user 0m7.072s 00:05:37.089 sys 0m0.574s 00:05:37.089 03:16:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.089 03:16:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:37.089 03:16:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.089 03:16:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.089 03:16:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.089 03:16:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.089 ************************************ 00:05:37.089 START TEST skip_rpc_with_json 00:05:37.089 ************************************ 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58150 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58150 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58150 ']' 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.089 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.090 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.090 03:16:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.349 [2024-11-05 03:16:00.751720] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:05:37.349 [2024-11-05 03:16:00.751864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:05:37.608 [2024-11-05 03:16:00.935542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.608 [2024-11-05 03:16:01.081634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 [2024-11-05 03:16:02.091819] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.545 request: 00:05:38.545 { 00:05:38.545 "trtype": "tcp", 00:05:38.545 "method": "nvmf_get_transports", 00:05:38.545 "req_id": 1 00:05:38.545 } 00:05:38.545 Got JSON-RPC error response 00:05:38.545 response: 00:05:38.545 { 00:05:38.545 "code": -19, 00:05:38.545 "message": "No such device" 00:05:38.545 } 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.545 [2024-11-05 03:16:02.107923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.545 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.805 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.805 { 00:05:38.805 "subsystems": [ 00:05:38.805 { 00:05:38.805 "subsystem": "fsdev", 00:05:38.805 "config": [ 00:05:38.805 { 00:05:38.805 "method": "fsdev_set_opts", 00:05:38.805 "params": { 00:05:38.805 "fsdev_io_pool_size": 65535, 00:05:38.805 "fsdev_io_cache_size": 256 00:05:38.805 } 00:05:38.805 } 00:05:38.805 ] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "keyring", 00:05:38.805 "config": [] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "iobuf", 00:05:38.805 "config": [ 00:05:38.805 { 00:05:38.805 "method": "iobuf_set_options", 00:05:38.805 "params": { 00:05:38.805 "small_pool_count": 8192, 00:05:38.805 "large_pool_count": 1024, 00:05:38.805 "small_bufsize": 8192, 00:05:38.805 "large_bufsize": 135168, 00:05:38.805 "enable_numa": false 00:05:38.805 } 00:05:38.805 } 00:05:38.805 ] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "sock", 00:05:38.805 "config": [ 00:05:38.805 { 
00:05:38.805 "method": "sock_set_default_impl", 00:05:38.805 "params": { 00:05:38.805 "impl_name": "posix" 00:05:38.805 } 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "method": "sock_impl_set_options", 00:05:38.805 "params": { 00:05:38.805 "impl_name": "ssl", 00:05:38.805 "recv_buf_size": 4096, 00:05:38.805 "send_buf_size": 4096, 00:05:38.805 "enable_recv_pipe": true, 00:05:38.805 "enable_quickack": false, 00:05:38.805 "enable_placement_id": 0, 00:05:38.805 "enable_zerocopy_send_server": true, 00:05:38.805 "enable_zerocopy_send_client": false, 00:05:38.805 "zerocopy_threshold": 0, 00:05:38.805 "tls_version": 0, 00:05:38.805 "enable_ktls": false 00:05:38.805 } 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "method": "sock_impl_set_options", 00:05:38.805 "params": { 00:05:38.805 "impl_name": "posix", 00:05:38.805 "recv_buf_size": 2097152, 00:05:38.805 "send_buf_size": 2097152, 00:05:38.805 "enable_recv_pipe": true, 00:05:38.805 "enable_quickack": false, 00:05:38.805 "enable_placement_id": 0, 00:05:38.805 "enable_zerocopy_send_server": true, 00:05:38.805 "enable_zerocopy_send_client": false, 00:05:38.805 "zerocopy_threshold": 0, 00:05:38.805 "tls_version": 0, 00:05:38.805 "enable_ktls": false 00:05:38.805 } 00:05:38.805 } 00:05:38.805 ] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "vmd", 00:05:38.805 "config": [] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "accel", 00:05:38.805 "config": [ 00:05:38.805 { 00:05:38.805 "method": "accel_set_options", 00:05:38.805 "params": { 00:05:38.805 "small_cache_size": 128, 00:05:38.805 "large_cache_size": 16, 00:05:38.805 "task_count": 2048, 00:05:38.805 "sequence_count": 2048, 00:05:38.805 "buf_count": 2048 00:05:38.805 } 00:05:38.805 } 00:05:38.805 ] 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "subsystem": "bdev", 00:05:38.805 "config": [ 00:05:38.805 { 00:05:38.805 "method": "bdev_set_options", 00:05:38.805 "params": { 00:05:38.805 "bdev_io_pool_size": 65535, 00:05:38.805 "bdev_io_cache_size": 256, 00:05:38.805 "bdev_auto_examine": true, 00:05:38.805 "iobuf_small_cache_size": 128, 00:05:38.805 "iobuf_large_cache_size": 16 00:05:38.805 } 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "method": "bdev_raid_set_options", 00:05:38.805 "params": { 00:05:38.805 "process_window_size_kb": 1024, 00:05:38.805 "process_max_bandwidth_mb_sec": 0 00:05:38.805 } 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "method": "bdev_iscsi_set_options", 00:05:38.805 "params": { 00:05:38.805 "timeout_sec": 30 00:05:38.805 } 00:05:38.805 }, 00:05:38.805 { 00:05:38.805 "method": "bdev_nvme_set_options", 00:05:38.805 "params": { 00:05:38.805 "action_on_timeout": "none", 00:05:38.805 "timeout_us": 0, 00:05:38.805 "timeout_admin_us": 0, 00:05:38.805 "keep_alive_timeout_ms": 10000, 00:05:38.805 "arbitration_burst": 0, 00:05:38.805 "low_priority_weight": 0, 00:05:38.805 "medium_priority_weight": 0, 00:05:38.805 "high_priority_weight": 0, 00:05:38.805 "nvme_adminq_poll_period_us": 10000, 00:05:38.805 "nvme_ioq_poll_period_us": 0, 00:05:38.805 "io_queue_requests": 0, 00:05:38.805 "delay_cmd_submit": true, 00:05:38.805 "transport_retry_count": 4, 00:05:38.805 "bdev_retry_count": 3, 00:05:38.805 "transport_ack_timeout": 0, 00:05:38.805 "ctrlr_loss_timeout_sec": 0, 00:05:38.805 "reconnect_delay_sec": 0, 00:05:38.805 "fast_io_fail_timeout_sec": 0, 00:05:38.805 "disable_auto_failback": false, 00:05:38.805 "generate_uuids": false, 00:05:38.805 "transport_tos": 0, 00:05:38.805 "nvme_error_stat": false, 00:05:38.805 "rdma_srq_size": 0, 00:05:38.805 "io_path_stat": false, 
00:05:38.805 "allow_accel_sequence": false, 00:05:38.805 "rdma_max_cq_size": 0, 00:05:38.806 "rdma_cm_event_timeout_ms": 0, 00:05:38.806 "dhchap_digests": [ 00:05:38.806 "sha256", 00:05:38.806 "sha384", 00:05:38.806 "sha512" 00:05:38.806 ], 00:05:38.806 "dhchap_dhgroups": [ 00:05:38.806 "null", 00:05:38.806 "ffdhe2048", 00:05:38.806 "ffdhe3072", 00:05:38.806 "ffdhe4096", 00:05:38.806 "ffdhe6144", 00:05:38.806 "ffdhe8192" 00:05:38.806 ] 00:05:38.806 } 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "method": "bdev_nvme_set_hotplug", 00:05:38.806 "params": { 00:05:38.806 "period_us": 100000, 00:05:38.806 "enable": false 00:05:38.806 } 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "method": "bdev_wait_for_examine" 00:05:38.806 } 00:05:38.806 ] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "scsi", 00:05:38.806 "config": null 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "scheduler", 00:05:38.806 "config": [ 00:05:38.806 { 00:05:38.806 "method": "framework_set_scheduler", 00:05:38.806 "params": { 00:05:38.806 "name": "static" 00:05:38.806 } 00:05:38.806 } 00:05:38.806 ] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "vhost_scsi", 00:05:38.806 "config": [] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "vhost_blk", 00:05:38.806 "config": [] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "ublk", 00:05:38.806 "config": [] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "nbd", 00:05:38.806 "config": [] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "nvmf", 00:05:38.806 "config": [ 00:05:38.806 { 00:05:38.806 "method": "nvmf_set_config", 00:05:38.806 "params": { 00:05:38.806 "discovery_filter": "match_any", 00:05:38.806 "admin_cmd_passthru": { 00:05:38.806 "identify_ctrlr": false 00:05:38.806 }, 00:05:38.806 "dhchap_digests": [ 00:05:38.806 "sha256", 00:05:38.806 "sha384", 00:05:38.806 "sha512" 00:05:38.806 ], 00:05:38.806 "dhchap_dhgroups": [ 00:05:38.806 "null", 00:05:38.806 "ffdhe2048", 00:05:38.806 "ffdhe3072", 00:05:38.806 "ffdhe4096", 00:05:38.806 "ffdhe6144", 00:05:38.806 "ffdhe8192" 00:05:38.806 ] 00:05:38.806 } 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "method": "nvmf_set_max_subsystems", 00:05:38.806 "params": { 00:05:38.806 "max_subsystems": 1024 00:05:38.806 } 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "method": "nvmf_set_crdt", 00:05:38.806 "params": { 00:05:38.806 "crdt1": 0, 00:05:38.806 "crdt2": 0, 00:05:38.806 "crdt3": 0 00:05:38.806 } 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "method": "nvmf_create_transport", 00:05:38.806 "params": { 00:05:38.806 "trtype": "TCP", 00:05:38.806 "max_queue_depth": 128, 00:05:38.806 "max_io_qpairs_per_ctrlr": 127, 00:05:38.806 "in_capsule_data_size": 4096, 00:05:38.806 "max_io_size": 131072, 00:05:38.806 "io_unit_size": 131072, 00:05:38.806 "max_aq_depth": 128, 00:05:38.806 "num_shared_buffers": 511, 00:05:38.806 "buf_cache_size": 4294967295, 00:05:38.806 "dif_insert_or_strip": false, 00:05:38.806 "zcopy": false, 00:05:38.806 "c2h_success": true, 00:05:38.806 "sock_priority": 0, 00:05:38.806 "abort_timeout_sec": 1, 00:05:38.806 "ack_timeout": 0, 00:05:38.806 "data_wr_pool_size": 0 00:05:38.806 } 00:05:38.806 } 00:05:38.806 ] 00:05:38.806 }, 00:05:38.806 { 00:05:38.806 "subsystem": "iscsi", 00:05:38.806 "config": [ 00:05:38.806 { 00:05:38.806 "method": "iscsi_set_options", 00:05:38.806 "params": { 00:05:38.806 "node_base": "iqn.2016-06.io.spdk", 00:05:38.806 "max_sessions": 128, 00:05:38.806 "max_connections_per_session": 2, 00:05:38.806 "max_queue_depth": 64, 00:05:38.806 
"default_time2wait": 2, 00:05:38.806 "default_time2retain": 20, 00:05:38.806 "first_burst_length": 8192, 00:05:38.806 "immediate_data": true, 00:05:38.806 "allow_duplicated_isid": false, 00:05:38.806 "error_recovery_level": 0, 00:05:38.806 "nop_timeout": 60, 00:05:38.806 "nop_in_interval": 30, 00:05:38.806 "disable_chap": false, 00:05:38.806 "require_chap": false, 00:05:38.806 "mutual_chap": false, 00:05:38.806 "chap_group": 0, 00:05:38.806 "max_large_datain_per_connection": 64, 00:05:38.806 "max_r2t_per_connection": 4, 00:05:38.806 "pdu_pool_size": 36864, 00:05:38.806 "immediate_data_pool_size": 16384, 00:05:38.806 "data_out_pool_size": 2048 00:05:38.806 } 00:05:38.806 } 00:05:38.806 ] 00:05:38.806 } 00:05:38.806 ] 00:05:38.806 } 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58150 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58150 ']' 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58150 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58150 00:05:38.806 killing process with pid 58150 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58150' 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58150 00:05:38.806 03:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58150 00:05:42.094 03:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58206 00:05:42.094 03:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.094 03:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58206 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58206 ']' 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58206 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58206 00:05:47.369 killing process with pid 58206 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58206' 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58206 00:05:47.369 03:16:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58206 00:05:49.274 03:16:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.274 03:16:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.274 ************************************ 00:05:49.274 END TEST skip_rpc_with_json 00:05:49.274 ************************************ 00:05:49.274 00:05:49.274 real 0m12.089s 00:05:49.274 user 0m11.194s 00:05:49.274 sys 0m1.214s 00:05:49.274 03:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.274 03:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.274 03:16:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:49.274 03:16:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.274 03:16:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.274 03:16:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.275 ************************************ 00:05:49.275 START TEST skip_rpc_with_delay 00:05:49.275 ************************************ 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.275 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.533 [2024-11-05 03:16:12.919959] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
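That *ERROR* line is precisely what skip_rpc_with_delay probes: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is contradictory when --no-rpc-server disables the server outright, so spdk_tgt has to refuse to start. The NOT wrapper in the following trace asserts exactly that non-zero exit; a minimal sketch of the same check:

# Sketch of test_skip_rpc_with_delay(): the flag combination must be rejected.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

if $spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'spdk_tgt accepted a contradictory flag combination' >&2
    exit 1
fi
echo 'got the expected startup failure'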
00:05:49.533 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:49.533 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.533 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.533 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.533 00:05:49.533 real 0m0.202s 00:05:49.533 user 0m0.100s 00:05:49.534 sys 0m0.100s 00:05:49.534 ************************************ 00:05:49.534 END TEST skip_rpc_with_delay 00:05:49.534 ************************************ 00:05:49.534 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.534 03:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:49.534 03:16:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:49.534 03:16:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:49.534 03:16:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:49.534 03:16:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.534 03:16:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.534 03:16:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.534 ************************************ 00:05:49.534 START TEST exit_on_failed_rpc_init 00:05:49.534 ************************************ 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58345 00:05:49.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58345 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58345 ']' 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.534 03:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.792 [2024-11-05 03:16:13.186724] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:05:49.792 [2024-11-05 03:16:13.187073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58345 ] 00:05:49.792 [2024-11-05 03:16:13.371268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.051 [2024-11-05 03:16:13.524856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:51.425 03:16:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.425 [2024-11-05 03:16:14.728584] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:05:51.425 [2024-11-05 03:16:14.728751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:05:51.425 [2024-11-05 03:16:14.919051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.684 [2024-11-05 03:16:15.039007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.684 [2024-11-05 03:16:15.039108] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:51.684 [2024-11-05 03:16:15.039125] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:51.684 [2024-11-05 03:16:15.039151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58345 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58345 ']' 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58345 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58345 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58345' 00:05:51.943 killing process with pid 58345 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58345 00:05:51.943 03:16:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58345 00:05:54.477 00:05:54.477 real 0m4.923s 00:05:54.477 user 0m5.131s 00:05:54.477 sys 0m0.798s 00:05:54.477 ************************************ 00:05:54.477 END TEST exit_on_failed_rpc_init 00:05:54.477 ************************************ 00:05:54.477 03:16:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.477 03:16:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 03:16:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.736 00:05:54.736 real 0m25.493s 00:05:54.736 user 0m23.712s 00:05:54.736 sys 0m3.017s 00:05:54.736 03:16:18 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.736 03:16:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 ************************************ 00:05:54.736 END TEST skip_rpc 00:05:54.736 ************************************ 00:05:54.736 03:16:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:54.736 03:16:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.736 03:16:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.736 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 
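The exit_on_failed_rpc_init run that just ended boils down to a socket-collision check: a second spdk_tgt on a different core mask but the same default /var/tmp/spdk.sock must fail in spdk_rpc_initialize() (the two rpc.c errors above) and exit non-zero, while the first instance stays up to be killed cleanly. Roughly (a sketch; waitforlisten is the harness helper, visible earlier in this log, that polls until the RPC socket answers):

# Sketch of test_exit_on_failed_rpc_init() as traced above.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                  # first instance owns /var/tmp/spdk.sock
spdk_pid=$!
waitforlisten "$spdk_pid"           # wait until the socket is really up

if $spdk_tgt -m 0x2; then           # same socket path -> rpc listen must fail
    echo 'second target should not have started' >&2
    exit 1
fi

kill "$spdk_pid" && wait "$spdk_pid"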
************************************ 00:05:54.736 START TEST rpc_client 00:05:54.736 ************************************ 00:05:54.736 03:16:18 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:54.736 * Looking for test storage... 00:05:54.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:54.736 03:16:18 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.736 03:16:18 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.736 03:16:18 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.996 03:16:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.996 --rc genhtml_branch_coverage=1 00:05:54.996 --rc genhtml_function_coverage=1 00:05:54.996 --rc genhtml_legend=1 00:05:54.996 --rc geninfo_all_blocks=1 00:05:54.996 --rc geninfo_unexecuted_blocks=1 00:05:54.996 00:05:54.996 ' 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.996 --rc genhtml_branch_coverage=1 00:05:54.996 --rc genhtml_function_coverage=1 00:05:54.996 --rc genhtml_legend=1 00:05:54.996 --rc geninfo_all_blocks=1 00:05:54.996 --rc geninfo_unexecuted_blocks=1 00:05:54.996 00:05:54.996 ' 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.996 --rc genhtml_branch_coverage=1 00:05:54.996 --rc genhtml_function_coverage=1 00:05:54.996 --rc genhtml_legend=1 00:05:54.996 --rc geninfo_all_blocks=1 00:05:54.996 --rc geninfo_unexecuted_blocks=1 00:05:54.996 00:05:54.996 ' 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.996 --rc genhtml_branch_coverage=1 00:05:54.996 --rc genhtml_function_coverage=1 00:05:54.996 --rc genhtml_legend=1 00:05:54.996 --rc geninfo_all_blocks=1 00:05:54.996 --rc geninfo_unexecuted_blocks=1 00:05:54.996 00:05:54.996 ' 00:05:54.996 03:16:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:54.996 OK 00:05:54.996 03:16:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:54.996 00:05:54.996 real 0m0.323s 00:05:54.996 user 0m0.167s 00:05:54.996 sys 0m0.174s 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.996 03:16:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:54.996 ************************************ 00:05:54.996 END TEST rpc_client 00:05:54.996 ************************************ 00:05:54.996 03:16:18 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:54.996 03:16:18 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.996 03:16:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.996 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:05:54.996 ************************************ 00:05:54.996 START TEST json_config 00:05:54.996 ************************************ 00:05:54.996 03:16:18 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.256 03:16:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.256 03:16:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.256 03:16:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.256 03:16:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.256 03:16:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.256 03:16:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.256 03:16:18 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.256 03:16:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.256 03:16:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.256 03:16:18 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.256 03:16:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.256 03:16:18 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.256 03:16:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.256 03:16:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.256 03:16:18 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.256 03:16:18 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.257 --rc genhtml_branch_coverage=1 00:05:55.257 --rc genhtml_function_coverage=1 00:05:55.257 --rc genhtml_legend=1 00:05:55.257 --rc geninfo_all_blocks=1 00:05:55.257 --rc geninfo_unexecuted_blocks=1 00:05:55.257 00:05:55.257 ' 00:05:55.257 03:16:18 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.257 --rc genhtml_branch_coverage=1 00:05:55.257 --rc genhtml_function_coverage=1 00:05:55.257 --rc genhtml_legend=1 00:05:55.257 --rc geninfo_all_blocks=1 00:05:55.257 --rc geninfo_unexecuted_blocks=1 00:05:55.257 00:05:55.257 ' 00:05:55.257 03:16:18 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.257 --rc genhtml_branch_coverage=1 00:05:55.257 --rc genhtml_function_coverage=1 00:05:55.257 --rc genhtml_legend=1 00:05:55.257 --rc geninfo_all_blocks=1 00:05:55.257 --rc geninfo_unexecuted_blocks=1 00:05:55.257 00:05:55.257 ' 00:05:55.257 03:16:18 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.257 --rc genhtml_branch_coverage=1 00:05:55.257 --rc genhtml_function_coverage=1 00:05:55.257 --rc genhtml_legend=1 00:05:55.257 --rc geninfo_all_blocks=1 00:05:55.257 --rc geninfo_unexecuted_blocks=1 00:05:55.257 00:05:55.257 ' 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.257 03:16:18 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.257 03:16:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.257 03:16:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.257 03:16:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.257 03:16:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.257 03:16:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.257 03:16:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.257 03:16:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.257 03:16:18 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.257 03:16:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.257 03:16:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.257 03:16:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:55.257 WARNING: No tests are enabled so not running JSON configuration tests 00:05:55.257 03:16:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:55.257 00:05:55.257 real 0m0.231s 00:05:55.257 user 0m0.123s 00:05:55.257 sys 0m0.111s 00:05:55.257 03:16:18 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.257 03:16:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.257 ************************************ 00:05:55.257 END TEST json_config 00:05:55.257 ************************************ 00:05:55.257 03:16:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.257 03:16:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.257 03:16:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.257 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:05:55.517 ************************************ 00:05:55.517 START TEST json_config_extra_key 00:05:55.517 ************************************ 00:05:55.517 03:16:18 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.517 03:16:18 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.517 03:16:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.517 03:16:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.517 03:16:19 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.517 03:16:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.517 --rc genhtml_branch_coverage=1 00:05:55.517 --rc genhtml_function_coverage=1 00:05:55.517 --rc genhtml_legend=1 00:05:55.517 --rc geninfo_all_blocks=1 00:05:55.517 --rc geninfo_unexecuted_blocks=1 00:05:55.517 00:05:55.517 ' 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.517 --rc genhtml_branch_coverage=1 00:05:55.517 --rc genhtml_function_coverage=1 00:05:55.517 --rc genhtml_legend=1 00:05:55.517 --rc geninfo_all_blocks=1 00:05:55.517 --rc geninfo_unexecuted_blocks=1 00:05:55.517 00:05:55.517 ' 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.517 --rc genhtml_branch_coverage=1 00:05:55.517 --rc genhtml_function_coverage=1 00:05:55.517 --rc genhtml_legend=1 00:05:55.517 --rc geninfo_all_blocks=1 00:05:55.517 --rc geninfo_unexecuted_blocks=1 00:05:55.517 00:05:55.517 ' 00:05:55.517 03:16:19 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.517 --rc genhtml_branch_coverage=1 00:05:55.517 --rc 
genhtml_function_coverage=1 00:05:55.517 --rc genhtml_legend=1 00:05:55.517 --rc geninfo_all_blocks=1 00:05:55.517 --rc geninfo_unexecuted_blocks=1 00:05:55.517 00:05:55.517 ' 00:05:55.517 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=55c5990d-2614-4b15-ace8-ffb5cf34a72e 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.518 03:16:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.518 03:16:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.518 03:16:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.518 03:16:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.518 03:16:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.518 03:16:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.518 03:16:19 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.518 03:16:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:55.518 03:16:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.518 03:16:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.518 INFO: launching applications... 00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
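The "launching applications" marker above is followed by json_config_test_start_app, which starts a dedicated spdk_tgt with the extra_key.json config on a private RPC socket and waits for it to answer. A condensed sketch of that flow, using the exact command line recorded below — the polling loop is a simplification of the waitforlisten helper, not a verbatim copy of the SPDK source:

    # Start the target with the test JSON config on its own RPC socket
    # (command line as recorded in this log).
    app_socket=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "$app_socket" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    # Poll the RPC socket until the app answers; any successful RPC means
    # the target is up and has finished loading the JSON config.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$app_socket" \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done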
00:05:55.518 03:16:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58584 00:05:55.518 Waiting for target to run... 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58584 /var/tmp/spdk_tgt.sock 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58584 ']' 00:05:55.518 03:16:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.518 03:16:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.778 [2024-11-05 03:16:19.189237] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:05:55.778 [2024-11-05 03:16:19.189383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:05:56.038 [2024-11-05 03:16:19.587476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.296 [2024-11-05 03:16:19.722387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.234 03:16:20 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.234 00:05:57.234 03:16:20 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:57.234 03:16:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.235 INFO: shutting down applications... 00:05:57.235 03:16:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
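What follows is json_config_test_shutdown_app: the test sends SIGINT to pid 58584 and then polls for up to 30 half-second intervals until the process is gone — each "sleep 0.5" block in the trace below is one iteration. Reduced to its core (a sketch of the traced loop, not the full helper from test/json_config/common.sh):

    app_pid=58584                   # pid recorded for this run
    kill -SIGINT "$app_pid"         # ask spdk_tgt to shut down cleanly

    # kill -0 delivers no signal; it only tests whether the pid still exists.
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    kill -0 "$app_pid" 2>/dev/null || echo 'SPDK target shutdown done'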
00:05:57.235 03:16:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58584 ]] 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58584 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:57.235 03:16:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.494 03:16:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.494 03:16:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.494 03:16:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:57.494 03:16:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.063 03:16:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.063 03:16:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.063 03:16:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:58.063 03:16:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.632 03:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.632 03:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.632 03:16:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:58.632 03:16:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.200 03:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.200 03:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.200 03:16:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:59.200 03:16:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.768 03:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.768 03:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.768 03:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:05:59.768 03:16:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.027 03:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.027 03:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.027 03:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:06:00.027 03:16:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58584 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.596 SPDK target shutdown 
done 00:06:00.596 03:16:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.596 Success 00:06:00.596 03:16:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:00.596 00:06:00.596 real 0m5.255s 00:06:00.596 user 0m4.495s 00:06:00.596 sys 0m0.688s 00:06:00.596 03:16:24 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.596 03:16:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 ************************************ 00:06:00.596 END TEST json_config_extra_key 00:06:00.596 ************************************ 00:06:00.596 03:16:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.596 03:16:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.596 03:16:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.596 03:16:24 -- common/autotest_common.sh@10 -- # set +x 00:06:00.856 ************************************ 00:06:00.856 START TEST alias_rpc 00:06:00.856 ************************************ 00:06:00.856 03:16:24 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.856 * Looking for test storage... 00:06:00.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.856 03:16:24 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.856 03:16:24 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.856 03:16:24 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.856 03:16:24 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.856 03:16:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.857 03:16:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.857 --rc genhtml_branch_coverage=1 00:06:00.857 --rc genhtml_function_coverage=1 00:06:00.857 --rc genhtml_legend=1 00:06:00.857 --rc geninfo_all_blocks=1 00:06:00.857 --rc geninfo_unexecuted_blocks=1 00:06:00.857 00:06:00.857 ' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.857 --rc genhtml_branch_coverage=1 00:06:00.857 --rc genhtml_function_coverage=1 00:06:00.857 --rc genhtml_legend=1 00:06:00.857 --rc geninfo_all_blocks=1 00:06:00.857 --rc geninfo_unexecuted_blocks=1 00:06:00.857 00:06:00.857 ' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.857 --rc genhtml_branch_coverage=1 00:06:00.857 --rc genhtml_function_coverage=1 00:06:00.857 --rc genhtml_legend=1 00:06:00.857 --rc geninfo_all_blocks=1 00:06:00.857 --rc geninfo_unexecuted_blocks=1 00:06:00.857 00:06:00.857 ' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.857 --rc genhtml_branch_coverage=1 00:06:00.857 --rc genhtml_function_coverage=1 00:06:00.857 --rc genhtml_legend=1 00:06:00.857 --rc geninfo_all_blocks=1 00:06:00.857 --rc geninfo_unexecuted_blocks=1 00:06:00.857 00:06:00.857 ' 00:06:00.857 03:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.857 03:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58702 00:06:00.857 03:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.857 03:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58702 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58702 ']' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
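The "lt 1.15 2" trace that keeps reappearing above is scripts/common.sh deciding whether the installed lcov predates 2.0, which determines the --rc flag spelling exported into LCOV_OPTS. Condensed into a standalone function (a sketch of the logic the xtrace walks through; the real cmp_versions also handles >, = and validates each field with its decimal helper):

    lt() {  # returns 0 (true) when version $1 sorts before version $2
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "pre-2.0 lcov flags" || echo "lcov >= 2 flags"

For "lt 1.15 2" the first components already decide it (1 < 2, return 0), which is why the run above settles on the pre-2.0 lcov_branch_coverage/lcov_function_coverage flag names.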
00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.857 03:16:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.115 [2024-11-05 03:16:24.518087] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:06:01.115 [2024-11-05 03:16:24.518218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:06:01.115 [2024-11-05 03:16:24.698439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.375 [2024-11-05 03:16:24.844168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.312 03:16:25 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.312 03:16:25 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:02.312 03:16:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.580 03:16:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58702 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58702 ']' 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58702 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58702 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.580 killing process with pid 58702 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58702' 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@971 -- # kill 58702 00:06:02.580 03:16:26 alias_rpc -- common/autotest_common.sh@976 -- # wait 58702 00:06:05.873 00:06:05.873 real 0m4.647s 00:06:05.873 user 0m4.473s 00:06:05.873 sys 0m0.785s 00:06:05.873 03:16:28 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.873 ************************************ 00:06:05.873 END TEST alias_rpc 00:06:05.873 ************************************ 00:06:05.873 03:16:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 03:16:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:05.873 03:16:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.873 03:16:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.873 03:16:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.873 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 ************************************ 00:06:05.873 START TEST spdkcli_tcp 00:06:05.873 ************************************ 00:06:05.873 03:16:28 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.873 * Looking for test storage... 
00:06:05.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.873 03:16:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.873 --rc genhtml_branch_coverage=1 00:06:05.873 --rc genhtml_function_coverage=1 00:06:05.873 --rc genhtml_legend=1 00:06:05.873 --rc geninfo_all_blocks=1 00:06:05.873 --rc geninfo_unexecuted_blocks=1 00:06:05.873 00:06:05.873 ' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.873 --rc genhtml_branch_coverage=1 00:06:05.873 --rc genhtml_function_coverage=1 00:06:05.873 --rc genhtml_legend=1 00:06:05.873 --rc geninfo_all_blocks=1 00:06:05.873 --rc geninfo_unexecuted_blocks=1 00:06:05.873 
00:06:05.873 ' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.873 --rc genhtml_branch_coverage=1 00:06:05.873 --rc genhtml_function_coverage=1 00:06:05.873 --rc genhtml_legend=1 00:06:05.873 --rc geninfo_all_blocks=1 00:06:05.873 --rc geninfo_unexecuted_blocks=1 00:06:05.873 00:06:05.873 ' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.873 --rc genhtml_branch_coverage=1 00:06:05.873 --rc genhtml_function_coverage=1 00:06:05.873 --rc genhtml_legend=1 00:06:05.873 --rc geninfo_all_blocks=1 00:06:05.873 --rc geninfo_unexecuted_blocks=1 00:06:05.873 00:06:05.873 ' 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58815 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.873 03:16:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58815 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58815 ']' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:05.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:05.873 03:16:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 [2024-11-05 03:16:29.273185] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
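The spdkcli_tcp test exercises rpc.py over TCP rather than the usual UNIX socket: the target still listens on /var/tmp/spdk.sock, and socat bridges 127.0.0.1:9998 to it, as the next lines show. The moving parts in order — command lines as recorded in this log, with the waitforlisten/cleanup plumbing of test/spdkcli/tcp.sh omitted:

    # 1. Target on two cores, listening on the default UNIX RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &

    # 2. Bridge: connections to 127.0.0.1:9998 are forwarded to the socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # 3. Client: rpc.py speaks TCP (-s host, -p port) with connection
    #    retries (-r) and a per-request timeout in seconds (-t).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

Routing through socat lets the test cover rpc.py's TCP transport without teaching spdk_tgt itself to listen on a TCP port.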
00:06:05.873 [2024-11-05 03:16:29.273342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58815 ] 00:06:06.132 [2024-11-05 03:16:29.461595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.132 [2024-11-05 03:16:29.613082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.132 [2024-11-05 03:16:29.613128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.069 03:16:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.069 03:16:30 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:07.069 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58837 00:06:07.069 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.069 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.328 [ 00:06:07.328 "bdev_malloc_delete", 00:06:07.328 "bdev_malloc_create", 00:06:07.328 "bdev_null_resize", 00:06:07.328 "bdev_null_delete", 00:06:07.328 "bdev_null_create", 00:06:07.328 "bdev_nvme_cuse_unregister", 00:06:07.328 "bdev_nvme_cuse_register", 00:06:07.328 "bdev_opal_new_user", 00:06:07.328 "bdev_opal_set_lock_state", 00:06:07.328 "bdev_opal_delete", 00:06:07.328 "bdev_opal_get_info", 00:06:07.328 "bdev_opal_create", 00:06:07.328 "bdev_nvme_opal_revert", 00:06:07.328 "bdev_nvme_opal_init", 00:06:07.328 "bdev_nvme_send_cmd", 00:06:07.328 "bdev_nvme_set_keys", 00:06:07.328 "bdev_nvme_get_path_iostat", 00:06:07.328 "bdev_nvme_get_mdns_discovery_info", 00:06:07.328 "bdev_nvme_stop_mdns_discovery", 00:06:07.328 "bdev_nvme_start_mdns_discovery", 00:06:07.328 "bdev_nvme_set_multipath_policy", 00:06:07.328 "bdev_nvme_set_preferred_path", 00:06:07.328 "bdev_nvme_get_io_paths", 00:06:07.328 "bdev_nvme_remove_error_injection", 00:06:07.328 "bdev_nvme_add_error_injection", 00:06:07.328 "bdev_nvme_get_discovery_info", 00:06:07.328 "bdev_nvme_stop_discovery", 00:06:07.328 "bdev_nvme_start_discovery", 00:06:07.328 "bdev_nvme_get_controller_health_info", 00:06:07.328 "bdev_nvme_disable_controller", 00:06:07.328 "bdev_nvme_enable_controller", 00:06:07.328 "bdev_nvme_reset_controller", 00:06:07.328 "bdev_nvme_get_transport_statistics", 00:06:07.328 "bdev_nvme_apply_firmware", 00:06:07.328 "bdev_nvme_detach_controller", 00:06:07.328 "bdev_nvme_get_controllers", 00:06:07.328 "bdev_nvme_attach_controller", 00:06:07.328 "bdev_nvme_set_hotplug", 00:06:07.328 "bdev_nvme_set_options", 00:06:07.328 "bdev_passthru_delete", 00:06:07.328 "bdev_passthru_create", 00:06:07.328 "bdev_lvol_set_parent_bdev", 00:06:07.328 "bdev_lvol_set_parent", 00:06:07.328 "bdev_lvol_check_shallow_copy", 00:06:07.328 "bdev_lvol_start_shallow_copy", 00:06:07.328 "bdev_lvol_grow_lvstore", 00:06:07.328 "bdev_lvol_get_lvols", 00:06:07.328 "bdev_lvol_get_lvstores", 00:06:07.328 "bdev_lvol_delete", 00:06:07.328 "bdev_lvol_set_read_only", 00:06:07.328 "bdev_lvol_resize", 00:06:07.328 "bdev_lvol_decouple_parent", 00:06:07.328 "bdev_lvol_inflate", 00:06:07.328 "bdev_lvol_rename", 00:06:07.328 "bdev_lvol_clone_bdev", 00:06:07.328 "bdev_lvol_clone", 00:06:07.328 "bdev_lvol_snapshot", 00:06:07.328 "bdev_lvol_create", 00:06:07.328 "bdev_lvol_delete_lvstore", 00:06:07.328 "bdev_lvol_rename_lvstore", 00:06:07.328 
"bdev_lvol_create_lvstore", 00:06:07.328 "bdev_raid_set_options", 00:06:07.328 "bdev_raid_remove_base_bdev", 00:06:07.328 "bdev_raid_add_base_bdev", 00:06:07.328 "bdev_raid_delete", 00:06:07.328 "bdev_raid_create", 00:06:07.328 "bdev_raid_get_bdevs", 00:06:07.328 "bdev_error_inject_error", 00:06:07.328 "bdev_error_delete", 00:06:07.328 "bdev_error_create", 00:06:07.328 "bdev_split_delete", 00:06:07.328 "bdev_split_create", 00:06:07.328 "bdev_delay_delete", 00:06:07.328 "bdev_delay_create", 00:06:07.328 "bdev_delay_update_latency", 00:06:07.328 "bdev_zone_block_delete", 00:06:07.328 "bdev_zone_block_create", 00:06:07.328 "blobfs_create", 00:06:07.328 "blobfs_detect", 00:06:07.328 "blobfs_set_cache_size", 00:06:07.328 "bdev_xnvme_delete", 00:06:07.328 "bdev_xnvme_create", 00:06:07.328 "bdev_aio_delete", 00:06:07.328 "bdev_aio_rescan", 00:06:07.328 "bdev_aio_create", 00:06:07.328 "bdev_ftl_set_property", 00:06:07.328 "bdev_ftl_get_properties", 00:06:07.328 "bdev_ftl_get_stats", 00:06:07.328 "bdev_ftl_unmap", 00:06:07.328 "bdev_ftl_unload", 00:06:07.328 "bdev_ftl_delete", 00:06:07.328 "bdev_ftl_load", 00:06:07.328 "bdev_ftl_create", 00:06:07.328 "bdev_virtio_attach_controller", 00:06:07.328 "bdev_virtio_scsi_get_devices", 00:06:07.328 "bdev_virtio_detach_controller", 00:06:07.328 "bdev_virtio_blk_set_hotplug", 00:06:07.328 "bdev_iscsi_delete", 00:06:07.328 "bdev_iscsi_create", 00:06:07.328 "bdev_iscsi_set_options", 00:06:07.328 "accel_error_inject_error", 00:06:07.328 "ioat_scan_accel_module", 00:06:07.328 "dsa_scan_accel_module", 00:06:07.329 "iaa_scan_accel_module", 00:06:07.329 "keyring_file_remove_key", 00:06:07.329 "keyring_file_add_key", 00:06:07.329 "keyring_linux_set_options", 00:06:07.329 "fsdev_aio_delete", 00:06:07.329 "fsdev_aio_create", 00:06:07.329 "iscsi_get_histogram", 00:06:07.329 "iscsi_enable_histogram", 00:06:07.329 "iscsi_set_options", 00:06:07.329 "iscsi_get_auth_groups", 00:06:07.329 "iscsi_auth_group_remove_secret", 00:06:07.329 "iscsi_auth_group_add_secret", 00:06:07.329 "iscsi_delete_auth_group", 00:06:07.329 "iscsi_create_auth_group", 00:06:07.329 "iscsi_set_discovery_auth", 00:06:07.329 "iscsi_get_options", 00:06:07.329 "iscsi_target_node_request_logout", 00:06:07.329 "iscsi_target_node_set_redirect", 00:06:07.329 "iscsi_target_node_set_auth", 00:06:07.329 "iscsi_target_node_add_lun", 00:06:07.329 "iscsi_get_stats", 00:06:07.329 "iscsi_get_connections", 00:06:07.329 "iscsi_portal_group_set_auth", 00:06:07.329 "iscsi_start_portal_group", 00:06:07.329 "iscsi_delete_portal_group", 00:06:07.329 "iscsi_create_portal_group", 00:06:07.329 "iscsi_get_portal_groups", 00:06:07.329 "iscsi_delete_target_node", 00:06:07.329 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.329 "iscsi_target_node_add_pg_ig_maps", 00:06:07.329 "iscsi_create_target_node", 00:06:07.329 "iscsi_get_target_nodes", 00:06:07.329 "iscsi_delete_initiator_group", 00:06:07.329 "iscsi_initiator_group_remove_initiators", 00:06:07.329 "iscsi_initiator_group_add_initiators", 00:06:07.329 "iscsi_create_initiator_group", 00:06:07.329 "iscsi_get_initiator_groups", 00:06:07.329 "nvmf_set_crdt", 00:06:07.329 "nvmf_set_config", 00:06:07.329 "nvmf_set_max_subsystems", 00:06:07.329 "nvmf_stop_mdns_prr", 00:06:07.329 "nvmf_publish_mdns_prr", 00:06:07.329 "nvmf_subsystem_get_listeners", 00:06:07.329 "nvmf_subsystem_get_qpairs", 00:06:07.329 "nvmf_subsystem_get_controllers", 00:06:07.329 "nvmf_get_stats", 00:06:07.329 "nvmf_get_transports", 00:06:07.329 "nvmf_create_transport", 00:06:07.329 "nvmf_get_targets", 00:06:07.329 
"nvmf_delete_target", 00:06:07.329 "nvmf_create_target", 00:06:07.329 "nvmf_subsystem_allow_any_host", 00:06:07.329 "nvmf_subsystem_set_keys", 00:06:07.329 "nvmf_subsystem_remove_host", 00:06:07.329 "nvmf_subsystem_add_host", 00:06:07.329 "nvmf_ns_remove_host", 00:06:07.329 "nvmf_ns_add_host", 00:06:07.329 "nvmf_subsystem_remove_ns", 00:06:07.329 "nvmf_subsystem_set_ns_ana_group", 00:06:07.329 "nvmf_subsystem_add_ns", 00:06:07.329 "nvmf_subsystem_listener_set_ana_state", 00:06:07.329 "nvmf_discovery_get_referrals", 00:06:07.329 "nvmf_discovery_remove_referral", 00:06:07.329 "nvmf_discovery_add_referral", 00:06:07.329 "nvmf_subsystem_remove_listener", 00:06:07.329 "nvmf_subsystem_add_listener", 00:06:07.329 "nvmf_delete_subsystem", 00:06:07.329 "nvmf_create_subsystem", 00:06:07.329 "nvmf_get_subsystems", 00:06:07.329 "env_dpdk_get_mem_stats", 00:06:07.329 "nbd_get_disks", 00:06:07.329 "nbd_stop_disk", 00:06:07.329 "nbd_start_disk", 00:06:07.329 "ublk_recover_disk", 00:06:07.329 "ublk_get_disks", 00:06:07.329 "ublk_stop_disk", 00:06:07.329 "ublk_start_disk", 00:06:07.329 "ublk_destroy_target", 00:06:07.329 "ublk_create_target", 00:06:07.329 "virtio_blk_create_transport", 00:06:07.329 "virtio_blk_get_transports", 00:06:07.329 "vhost_controller_set_coalescing", 00:06:07.329 "vhost_get_controllers", 00:06:07.329 "vhost_delete_controller", 00:06:07.329 "vhost_create_blk_controller", 00:06:07.329 "vhost_scsi_controller_remove_target", 00:06:07.329 "vhost_scsi_controller_add_target", 00:06:07.329 "vhost_start_scsi_controller", 00:06:07.329 "vhost_create_scsi_controller", 00:06:07.329 "thread_set_cpumask", 00:06:07.329 "scheduler_set_options", 00:06:07.329 "framework_get_governor", 00:06:07.329 "framework_get_scheduler", 00:06:07.329 "framework_set_scheduler", 00:06:07.329 "framework_get_reactors", 00:06:07.329 "thread_get_io_channels", 00:06:07.329 "thread_get_pollers", 00:06:07.329 "thread_get_stats", 00:06:07.329 "framework_monitor_context_switch", 00:06:07.329 "spdk_kill_instance", 00:06:07.329 "log_enable_timestamps", 00:06:07.329 "log_get_flags", 00:06:07.329 "log_clear_flag", 00:06:07.329 "log_set_flag", 00:06:07.329 "log_get_level", 00:06:07.329 "log_set_level", 00:06:07.329 "log_get_print_level", 00:06:07.329 "log_set_print_level", 00:06:07.329 "framework_enable_cpumask_locks", 00:06:07.329 "framework_disable_cpumask_locks", 00:06:07.329 "framework_wait_init", 00:06:07.329 "framework_start_init", 00:06:07.329 "scsi_get_devices", 00:06:07.329 "bdev_get_histogram", 00:06:07.329 "bdev_enable_histogram", 00:06:07.329 "bdev_set_qos_limit", 00:06:07.329 "bdev_set_qd_sampling_period", 00:06:07.329 "bdev_get_bdevs", 00:06:07.329 "bdev_reset_iostat", 00:06:07.329 "bdev_get_iostat", 00:06:07.329 "bdev_examine", 00:06:07.329 "bdev_wait_for_examine", 00:06:07.329 "bdev_set_options", 00:06:07.329 "accel_get_stats", 00:06:07.329 "accel_set_options", 00:06:07.329 "accel_set_driver", 00:06:07.329 "accel_crypto_key_destroy", 00:06:07.329 "accel_crypto_keys_get", 00:06:07.329 "accel_crypto_key_create", 00:06:07.329 "accel_assign_opc", 00:06:07.329 "accel_get_module_info", 00:06:07.329 "accel_get_opc_assignments", 00:06:07.329 "vmd_rescan", 00:06:07.329 "vmd_remove_device", 00:06:07.329 "vmd_enable", 00:06:07.329 "sock_get_default_impl", 00:06:07.329 "sock_set_default_impl", 00:06:07.329 "sock_impl_set_options", 00:06:07.329 "sock_impl_get_options", 00:06:07.329 "iobuf_get_stats", 00:06:07.329 "iobuf_set_options", 00:06:07.329 "keyring_get_keys", 00:06:07.329 "framework_get_pci_devices", 00:06:07.329 
"framework_get_config", 00:06:07.329 "framework_get_subsystems", 00:06:07.329 "fsdev_set_opts", 00:06:07.329 "fsdev_get_opts", 00:06:07.329 "trace_get_info", 00:06:07.329 "trace_get_tpoint_group_mask", 00:06:07.329 "trace_disable_tpoint_group", 00:06:07.329 "trace_enable_tpoint_group", 00:06:07.329 "trace_clear_tpoint_mask", 00:06:07.329 "trace_set_tpoint_mask", 00:06:07.329 "notify_get_notifications", 00:06:07.329 "notify_get_types", 00:06:07.329 "spdk_get_version", 00:06:07.329 "rpc_get_methods" 00:06:07.329 ] 00:06:07.329 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.329 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.329 03:16:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58815 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58815 ']' 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58815 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.329 03:16:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58815 00:06:07.587 03:16:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.587 03:16:30 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.587 03:16:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58815' 00:06:07.587 killing process with pid 58815 00:06:07.587 03:16:30 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58815 00:06:07.587 03:16:30 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58815 00:06:10.119 00:06:10.119 real 0m4.665s 00:06:10.119 user 0m8.082s 00:06:10.119 sys 0m0.871s 00:06:10.119 03:16:33 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.119 03:16:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.119 ************************************ 00:06:10.119 END TEST spdkcli_tcp 00:06:10.119 ************************************ 00:06:10.119 03:16:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.119 03:16:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.119 03:16:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.119 03:16:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.119 ************************************ 00:06:10.119 START TEST dpdk_mem_utility 00:06:10.119 ************************************ 00:06:10.119 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.378 * Looking for test storage... 
00:06:10.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.378 03:16:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.378 --rc genhtml_branch_coverage=1 00:06:10.378 --rc genhtml_function_coverage=1 00:06:10.378 --rc genhtml_legend=1 00:06:10.378 --rc geninfo_all_blocks=1 00:06:10.378 --rc geninfo_unexecuted_blocks=1 00:06:10.378 00:06:10.378 ' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.378 --rc 
genhtml_branch_coverage=1 00:06:10.378 --rc genhtml_function_coverage=1 00:06:10.378 --rc genhtml_legend=1 00:06:10.378 --rc geninfo_all_blocks=1 00:06:10.378 --rc geninfo_unexecuted_blocks=1 00:06:10.378 00:06:10.378 ' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.378 --rc genhtml_branch_coverage=1 00:06:10.378 --rc genhtml_function_coverage=1 00:06:10.378 --rc genhtml_legend=1 00:06:10.378 --rc geninfo_all_blocks=1 00:06:10.378 --rc geninfo_unexecuted_blocks=1 00:06:10.378 00:06:10.378 ' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.378 --rc genhtml_branch_coverage=1 00:06:10.378 --rc genhtml_function_coverage=1 00:06:10.378 --rc genhtml_legend=1 00:06:10.378 --rc geninfo_all_blocks=1 00:06:10.378 --rc geninfo_unexecuted_blocks=1 00:06:10.378 00:06:10.378 ' 00:06:10.378 03:16:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:10.378 03:16:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58942 00:06:10.378 03:16:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.378 03:16:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58942 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58942 ']' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.378 03:16:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.637 [2024-11-05 03:16:34.005276] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:06:10.637 [2024-11-05 03:16:34.005470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58942 ] 00:06:10.637 [2024-11-05 03:16:34.188177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.895 [2024-11-05 03:16:34.334343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.831 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.831 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:11.831 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:11.831 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:11.832 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.832 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.832 { 00:06:11.832 "filename": "/tmp/spdk_mem_dump.txt" 00:06:11.832 } 00:06:11.832 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.832 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.832 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:11.832 1 heaps totaling size 816.000000 MiB 00:06:11.832 size: 816.000000 MiB heap id: 0 00:06:11.832 end heaps---------- 00:06:11.832 9 mempools totaling size 595.772034 MiB 00:06:11.832 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:11.832 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:11.832 size: 92.545471 MiB name: bdev_io_58942 00:06:11.832 size: 50.003479 MiB name: msgpool_58942 00:06:11.832 size: 36.509338 MiB name: fsdev_io_58942 00:06:11.832 size: 21.763794 MiB name: PDU_Pool 00:06:11.832 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:11.832 size: 4.133484 MiB name: evtpool_58942 00:06:11.832 size: 0.026123 MiB name: Session_Pool 00:06:11.832 end mempools------- 00:06:11.832 6 memzones totaling size 4.142822 MiB 00:06:11.832 size: 1.000366 MiB name: RG_ring_0_58942 00:06:11.832 size: 1.000366 MiB name: RG_ring_1_58942 00:06:11.832 size: 1.000366 MiB name: RG_ring_4_58942 00:06:11.832 size: 1.000366 MiB name: RG_ring_5_58942 00:06:11.832 size: 0.125366 MiB name: RG_ring_2_58942 00:06:11.832 size: 0.015991 MiB name: RG_ring_3_58942 00:06:11.832 end memzones------- 00:06:12.092 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:12.092 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:06:12.092 list of free elements. 
size: 16.790161 MiB
00:06:12.092 element at address: 0x200006400000 with size: 1.995972 MiB
00:06:12.092 element at address: 0x20000a600000 with size: 1.995972 MiB
00:06:12.092 element at address: 0x200003e00000 with size: 1.991028 MiB
00:06:12.092 element at address: 0x200018d00040 with size: 0.999939 MiB
00:06:12.092 element at address: 0x200019100040 with size: 0.999939 MiB
00:06:12.092 element at address: 0x200019200000 with size: 0.999084 MiB
00:06:12.092 element at address: 0x200031e00000 with size: 0.994324 MiB
00:06:12.092 element at address: 0x200000400000 with size: 0.992004 MiB
00:06:12.092 element at address: 0x200018a00000 with size: 0.959656 MiB
00:06:12.092 element at address: 0x200019500040 with size: 0.936401 MiB
00:06:12.092 element at address: 0x200000200000 with size: 0.716980 MiB
00:06:12.092 element at address: 0x20001ac00000 with size: 0.560486 MiB
00:06:12.092 element at address: 0x200000c00000 with size: 0.490173 MiB
00:06:12.092 element at address: 0x200018e00000 with size: 0.487976 MiB
00:06:12.092 element at address: 0x200019600000 with size: 0.485413 MiB
00:06:12.092 element at address: 0x200012c00000 with size: 0.443481 MiB
00:06:12.092 element at address: 0x200028000000 with size: 0.390442 MiB
00:06:12.092 element at address: 0x200000800000 with size: 0.350891 MiB
00:06:12.092 list of standard malloc elements. size: 199.288940 MiB
00:06:12.093 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:12.093 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:12.093 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:06:12.093 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:06:12.093 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:12.093 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:12.093 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:06:12.093 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:12.093 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:12.093 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:06:12.093 element at address: 0x200012bff040 with size: 0.000305 MiB
00:06:12.093 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:06:12.093 ... (several hundred further 0.000244 MiB bookkeeping elements in the 0x2000003/4..., 0x2000008..., 0x200000c7..., 0x20000a5f..., 0x200012b/c..., 0x200018..., 0x20001ac9..., and 0x2000280... ranges elided) ...
00:06:12.095 list of memzone associated elements. size: 599.920898 MiB
00:06:12.095 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:06:12.095 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:12.095 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:06:12.095 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:12.095 element at address: 0x200012df4740 with size: 92.045105 MiB
00:06:12.095 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58942_0
00:06:12.095 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:12.095 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58942_0
00:06:12.095 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:12.095 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58942_0
00:06:12.095 element at address: 0x2000197be900 with size: 20.255615 MiB
00:06:12.095 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:12.095 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:06:12.095 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:12.095 ... (smaller memzone entries elided: MP_evtpool_58942_0 at 3.000 MiB; RG_MP_msgpool_58942 at 2.000 MiB; the ~1 MiB MP_evtpool_58942, MP_PDU_Pool, MP_PDU_immediate_data_Pool, MP_PDU_data_out_Pool, MP_SCSI_TASK_Pool, and RG_ring_0/1/4/5_58942 regions; the ~0.5 MiB RG_MP_fsdev_io_58942, RG_MP_bdev_io_58942, RG_MP_PDU_Pool, and RG_MP_SCSI_TASK_Pool regions; RG_MP_PDU_immediate_data_Pool at 0.250 MiB; RG_MP_evtpool_58942 and RG_ring_2_58942 at 0.125 MiB; RG_MP_PDU_data_out_Pool at 0.031 MiB; MP_Session_Pool_0 at 0.023 MiB; RG_ring_3_58942 at 0.016 MiB; and the sub-0.01 MiB RG_MP_Session_Pool, MP_msgpool_58942, MP_fsdev_io_58942, MP_bdev_io_58942, and MP_Session_Pool entries) ...
00:06:12.095 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:12.095 03:16:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58942
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58942 ']'
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58942
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58942
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:12.095 killing process with pid 58942
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58942'
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58942
00:06:12.095 03:16:35 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58942
00:06:14.632
00:06:14.632 real 0m4.495s
00:06:14.632 user 0m4.240s
00:06:14.632 sys 0m0.786s
00:06:14.632 03:16:38 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:14.632 03:16:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:14.632 ************************************
00:06:14.632 END TEST dpdk_mem_utility
00:06:14.632 ************************************
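The memory map above is what test_dpdk_mem_info.sh captures from the running target before killing it. As a rough sketch of reproducing such a dump by hand, assuming a target listening on the default /var/tmp/spdk.sock socket and that this SPDK revision ships the env_dpdk_get_mem_stats RPC (the dump path in the second comment is an assumption; the RPC reply names the real file):

  $ scripts/rpc.py env_dpdk_get_mem_stats     # ask the target to write its DPDK heap statistics
  $ cat /tmp/spdk_mem_dump.txt                # hypothetical dump location taken from the RPC reply

The per-element lines come from DPDK's malloc heap walk, which is why almost every entry is a 0.000244 MiB (256-byte) bookkeeping element.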
00:06:14.632 03:16:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:14.632 03:16:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:14.632 03:16:38 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:14.632 03:16:38 -- common/autotest_common.sh@10 -- # set +x
00:06:14.632 ************************************
00:06:14.632 START TEST event
00:06:14.632 ************************************
00:06:14.632 03:16:38 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:14.892 * Looking for test storage...
00:06:14.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1691 -- # lcov --version
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:14.892 ... (scripts/common.sh cmp_versions trace comparing lcov 1.15 against 2 and returning 0 elided) ...
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1705 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:06:14.892 03:16:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:14.892 03:16:38 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:14.892 03:16:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:06:14.892 03:16:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:14.892 03:16:38 event -- common/autotest_common.sh@10 -- # set +x
00:06:14.892 ************************************
00:06:14.892 START TEST event_perf
00:06:14.892 ************************************
00:06:14.892 03:16:38 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:14.892 Running I/O for 1 seconds...[2024-11-05 03:16:38.467450] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:06:15.151 [2024-11-05 03:16:38.467576] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ]
00:06:15.151 [2024-11-05 03:16:38.655117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:15.410 [2024-11-05 03:16:38.810594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:15.410 [2024-11-05 03:16:38.810785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:15.410 Running I/O for 1 seconds...[2024-11-05 03:16:38.810936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.410 [2024-11-05 03:16:38.810986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:16.789
00:06:16.789 lcore 0: 79508
00:06:16.789 lcore 1: 79512
00:06:16.789 lcore 2: 79502
00:06:16.789 lcore 3: 79505
00:06:16.789 done.
00:06:16.789
00:06:16.789 real 0m1.671s
00:06:16.789 user 0m4.395s
00:06:16.789 sys 0m0.151s
00:06:16.789 03:16:40 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:16.789 03:16:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:16.789 ************************************
00:06:16.789 END TEST event_perf
00:06:16.789 ************************************
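event_perf is driven here with -m 0xF (a four-core mask) and -t 1 (one second); the per-lcore counters above are events processed in that window, and user time (0m4.395s) exceeds wall time (0m1.671s) because four reactors poll in parallel. A minimal sketch of rerunning it by hand with a narrower mask, assuming the binary has been built in place under test/event:

  $ test/event/event_perf/event_perf -m 0x3 -t 5   # two cores, five-second run; same flags the harness uses above

The near-identical per-lcore totals (79502 through 79512) are the useful signal: events are being distributed evenly across the reactors.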
00:06:16.789 03:16:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:16.789 03:16:40 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:06:16.789 03:16:40 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:16.789 03:16:40 event -- common/autotest_common.sh@10 -- # set +x
00:06:16.789 ************************************
00:06:16.789 START TEST event_reactor
00:06:16.789 ************************************
00:06:16.789 03:16:40 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:16.789 [2024-11-05 03:16:40.207910] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:06:17.048 [2024-11-05 03:16:40.208584] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59095 ]
00:06:17.048 [2024-11-05 03:16:40.393495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.048 [2024-11-05 03:16:40.547252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.442 test_start
00:06:18.442 oneshot
00:06:18.442 tick 100
00:06:18.442 tick 100
00:06:18.442 tick 250
00:06:18.442 tick 100
00:06:18.442 tick 100
00:06:18.442 tick 100
00:06:18.442 tick 250
00:06:18.442 tick 500
00:06:18.442 tick 100
00:06:18.442 tick 100
00:06:18.442 tick 250
00:06:18.442 tick 100
00:06:18.442 tick 100
00:06:18.442 test_end
00:06:18.442
00:06:18.442 real 0m1.637s
00:06:18.442 user 0m1.425s
00:06:18.442 sys 0m0.103s
00:06:18.442 03:16:41 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:18.442 03:16:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:18.442 ************************************
00:06:18.442 END TEST event_reactor
00:06:18.442 ************************************
00:06:18.442 03:16:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:18.442 03:16:41 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:06:18.442 03:16:41 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:18.442 03:16:41 event -- common/autotest_common.sh@10 -- # set +x
00:06:18.442 ************************************
00:06:18.442 START TEST event_reactor_perf
00:06:18.442 ************************************
00:06:18.442 03:16:41 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:18.442 [2024-11-05 03:16:41.916905] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:06:18.442 [2024-11-05 03:16:41.917048] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ]
00:06:18.701 [2024-11-05 03:16:42.093918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.701 [2024-11-05 03:16:42.240831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.080 test_start
00:06:20.080 test_end
00:06:20.080 Performance: 373020 events per second
00:06:20.080
00:06:20.080 real 0m1.623s
00:06:20.080 user 0m1.407s
00:06:20.080 sys 0m0.108s
00:06:20.080 03:16:43 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:20.080 03:16:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:20.080 ************************************
00:06:20.080 END TEST event_reactor_perf
00:06:20.080 ************************************
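A quick sanity check on the figure above: 373020 events per second on a single reactor works out to roughly 2.7 microseconds per event round trip through the event queue. For example:

  $ echo '10^9 / 373020' | bc    # integer division: ~2680 ns per event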
00:06:20.080 03:16:43 event -- event/event.sh@49 -- # uname -s
00:06:20.080 03:16:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:20.080 03:16:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:20.080 03:16:43 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:20.080 03:16:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:20.080 03:16:43 event -- common/autotest_common.sh@10 -- # set +x
00:06:20.080 ************************************
00:06:20.080 START TEST event_scheduler
00:06:20.080 ************************************
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:20.340 * Looking for test storage...
00:06:20.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:20.340 ... (lcov version-check and LCOV_OPTS export trace, identical to the one at the start of TEST event above, elided) ...
00:06:20.340 03:16:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:20.340 03:16:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59208
00:06:20.340 03:16:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:20.340 03:16:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:20.340 03:16:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59208
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59208 ']'
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:20.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:20.340 03:16:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:20.599 [2024-11-05 03:16:43.920855] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:06:20.599 [2024-11-05 03:16:43.921021] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ]
00:06:20.599 [2024-11-05 03:16:44.113599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:20.858 [2024-11-05 03:16:44.278881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.858 [2024-11-05 03:16:44.279172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:20.858 [2024-11-05 03:16:44.279722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:20.858 [2024-11-05 03:16:44.279767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:06:21.425 03:16:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:21.425 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:21.425 POWER: Cannot set governor of lcore 0 to userspace
00:06:21.425 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:21.425 POWER: Cannot set governor of lcore 0 to performance
00:06:21.425 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:21.425 POWER: Cannot set governor of lcore 0 to userspace
00:06:21.425 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:21.425 POWER: Cannot set governor of lcore 0 to userspace
00:06:21.425 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:21.425 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:21.425 POWER: Unable to set Power Management Environment for lcore 0
00:06:21.425 [2024-11-05 03:16:44.777116] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:06:21.425 [2024-11-05 03:16:44.777153] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:06:21.425 [2024-11-05 03:16:44.777169] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:21.425 [2024-11-05 03:16:44.777196] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:21.425 [2024-11-05 03:16:44.777210] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:21.425 [2024-11-05 03:16:44.777224] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
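The POWER and GUEST_CHANNEL errors above are expected inside a VM: the dynamic scheduler first tries to take control of the host's cpufreq governors, cannot (no /sys cpufreq access and no virtio power agent), and falls back to running without the dpdk governor; the test then proceeds with the scheduler's load/core/busy thresholds of 20/80/95. A sketch of the same switch against a live target, assuming this SPDK revision's rpc.py exposes the option flags under these names:

  $ scripts/rpc.py framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
  $ scripts/rpc.py framework_start_init       # finish subsystem init, as scheduler.sh does next
  $ scripts/rpc.py framework_get_scheduler    # verify the active scheduler and its parameters

Note the ordering: framework_set_scheduler must come before framework_start_init here because the scheduler app was launched with --wait-for-rpc.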
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.425 03:16:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.425 03:16:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:21.684 [2024-11-05 03:16:45.199765] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:21.684 03:16:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.684 03:16:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:21.684 03:16:45 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:21.684 03:16:45 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:21.684 03:16:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:21.684 ************************************
00:06:21.684 START TEST scheduler_create_thread
00:06:21.684 ************************************
00:06:21.684 03:16:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:06:21.684 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:21.684 ... (the '[[ 0 == 0 ]]' / xtrace_disable / set +x wrapper lines around each rpc_cmd call are elided here and below) ...
00:06:21.684 2
00:06:21.684 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:21.684 3
00:06:21.684 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:21.684 4
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:21.943 5
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:21.943 6
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:21.943 7
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:21.943 8
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:21.943 9
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:21.943 10
00:06:21.943 03:16:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:23.318 03:16:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:23.318 03:16:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:24.255 03:16:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:24.824 03:16:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:24.824 03:16:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:25.762 ************************************
00:06:25.762 END TEST scheduler_create_thread
00:06:25.762 ************************************
00:06:25.762
00:06:25.762 real 0m3.885s
00:06:25.762 user 0m0.028s
00:06:25.762 sys 0m0.007s
00:06:25.762 03:16:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:25.762 03:16:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
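Condensed, the scheduler_create_thread run above drives the scheduler_plugin test RPCs through one full thread lifecycle: four 100%-active threads pinned to cores 0-3 (ids 2-5), four idle pinned threads (ids 6-9), an unpinned one-third-active thread (id 10), a half_active thread (id 11) created idle and then bumped to 50% activity, and a throwaway thread (id 12) that is deleted. In the harness's own notation:

  $ rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # prints the new thread id
  $ rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11 to 50% busy
  $ rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                               # drop thread 12

Reading of the flags: -n names the thread, -m pins it to a core mask, -a sets its simulated busy percentage; this is inferred from the calls above rather than from plugin documentation.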
00:06:25.762 03:16:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:25.762 03:16:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59208
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59208 ']'
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59208
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59208
00:06:25.762 killing process with pid 59208
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59208'
00:06:25.762 03:16:49 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59208
00:06:26.022 03:16:49 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 59208
00:06:26.022 [2024-11-05 03:16:49.480411] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:27.399
00:06:27.399 real 0m7.176s
00:06:27.399 user 0m14.469s
00:06:27.399 sys 0m0.657s
00:06:27.399 03:16:50 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:27.399 ************************************
00:06:27.399 END TEST event_scheduler
00:06:27.399 ************************************
00:06:27.399 03:16:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:27.399 03:16:50 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:27.399 03:16:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:27.399 03:16:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:27.399 03:16:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:27.399 03:16:50 event -- common/autotest_common.sh@10 -- # set +x
00:06:27.399 ************************************
00:06:27.399 START TEST app_repeat
00:06:27.399 ************************************
00:06:27.399 03:16:50 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59336
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:27.399 Process app_repeat pid: 59336
00:06:27.399 spdk_app_start Round 0
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59336'
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:27.399 03:16:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59336 /var/tmp/spdk-nbd.sock
00:06:27.399 03:16:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59336 ']'
00:06:27.399 03:16:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:27.400 03:16:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:27.400 03:16:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:27.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:27.400 03:16:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:27.400 03:16:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:27.400 [2024-11-05 03:16:50.907466] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:06:27.400 [2024-11-05 03:16:50.907601] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59336 ]
00:06:27.658 [2024-11-05 03:16:51.095037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:27.658 [2024-11-05 03:16:51.238953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.658 [2024-11-05 03:16:51.238994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.226 03:16:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:28.226 03:16:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:06:28.226 03:16:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:28.485 Malloc0
00:06:28.744 03:16:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:29.003 Malloc1
00:06:29.003 03:16:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.003 03:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:29.263 /dev/nbd0
00:06:29.263 03:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:29.263 03:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:06:29.263 03:16:52 event.app_repeat --
common/autotest_common.sh@875 -- # break 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.263 1+0 records in 00:06:29.263 1+0 records out 00:06:29.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078789 s, 5.2 MB/s 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.263 03:16:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:29.263 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.263 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.263 03:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.522 /dev/nbd1 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.522 1+0 records in 00:06:29.522 1+0 records out 00:06:29.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417142 s, 9.8 MB/s 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.522 03:16:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.522 03:16:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.522 
03:16:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.786 { 00:06:29.786 "nbd_device": "/dev/nbd0", 00:06:29.786 "bdev_name": "Malloc0" 00:06:29.786 }, 00:06:29.786 { 00:06:29.786 "nbd_device": "/dev/nbd1", 00:06:29.786 "bdev_name": "Malloc1" 00:06:29.786 } 00:06:29.786 ]' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.786 { 00:06:29.786 "nbd_device": "/dev/nbd0", 00:06:29.786 "bdev_name": "Malloc0" 00:06:29.786 }, 00:06:29.786 { 00:06:29.786 "nbd_device": "/dev/nbd1", 00:06:29.786 "bdev_name": "Malloc1" 00:06:29.786 } 00:06:29.786 ]' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.786 /dev/nbd1' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.786 /dev/nbd1' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.786 256+0 records in 00:06:29.786 256+0 records out 00:06:29.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060256 s, 174 MB/s 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.786 256+0 records in 00:06:29.786 256+0 records out 00:06:29.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311222 s, 33.7 MB/s 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.786 256+0 records in 00:06:29.786 256+0 records out 00:06:29.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404878 s, 25.9 MB/s 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.786 03:16:53 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.786 03:16:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.787 03:16:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.097 03:16:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.355 03:16:53 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.355 03:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.614 03:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.614 03:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.614 03:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.614 03:16:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.614 03:16:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.183 03:16:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.561 [2024-11-05 03:16:55.726075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.561 [2024-11-05 03:16:55.865973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.561 [2024-11-05 03:16:55.865974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.561 [2024-11-05 03:16:56.090659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.561 [2024-11-05 03:16:56.090785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.941 spdk_app_start Round 1 00:06:33.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.941 03:16:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.941 03:16:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.941 03:16:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59336 /var/tmp/spdk-nbd.sock 00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59336 ']' 00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
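Round 0 above completes one full nbd_rpc_data_verify pass: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is pushed through both devices with direct I/O, compared back with cmp, and the device count is re-checked over RPC before the instance is killed. A condensed sketch of that cycle, using the paths and commands as they appear in the trace (rpc.py abbreviates the full scripts/rpc.py path):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # write phase: 256 x 4 KiB = 1 MiB of random data onto every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must match byte-for-byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"    # any mismatch fails the test
    done
    rm "$tmp_file"

    # device-count check ('|| true' because grep -c exits 1 on zero matches)
    count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)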
00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.941 03:16:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.200 03:16:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.200 03:16:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:34.200 03:16:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.459 Malloc0 00:06:34.459 03:16:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.718 Malloc1 00:06:34.718 03:16:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.718 03:16:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.977 /dev/nbd0 00:06:34.977 03:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.977 03:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.977 1+0 records in 00:06:34.977 1+0 records out 
00:06:34.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379789 s, 10.8 MB/s 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:34.977 03:16:58 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:34.977 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.977 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.977 03:16:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.236 /dev/nbd1 00:06:35.236 03:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.236 03:16:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.236 1+0 records in 00:06:35.236 1+0 records out 00:06:35.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346256 s, 11.8 MB/s 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:35.236 03:16:58 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:35.237 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.237 03:16:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.237 03:16:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.237 03:16:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.237 03:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.496 03:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.496 { 00:06:35.496 "nbd_device": "/dev/nbd0", 00:06:35.496 "bdev_name": "Malloc0" 00:06:35.496 }, 00:06:35.496 { 00:06:35.496 "nbd_device": "/dev/nbd1", 00:06:35.496 "bdev_name": "Malloc1" 00:06:35.496 } 
00:06:35.496 ]' 00:06:35.496 03:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.496 { 00:06:35.496 "nbd_device": "/dev/nbd0", 00:06:35.496 "bdev_name": "Malloc0" 00:06:35.496 }, 00:06:35.496 { 00:06:35.496 "nbd_device": "/dev/nbd1", 00:06:35.496 "bdev_name": "Malloc1" 00:06:35.496 } 00:06:35.496 ]' 00:06:35.496 03:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.496 /dev/nbd1' 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.496 /dev/nbd1' 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.496 03:16:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.756 256+0 records in 00:06:35.756 256+0 records out 00:06:35.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147698 s, 71.0 MB/s 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.756 256+0 records in 00:06:35.756 256+0 records out 00:06:35.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293171 s, 35.8 MB/s 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.756 256+0 records in 00:06:35.756 256+0 records out 00:06:35.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316474 s, 33.1 MB/s 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.756 03:16:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.756 03:16:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.015 03:16:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.274 03:16:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.532 03:16:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.532 03:16:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.791 03:17:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.173 [2024-11-05 03:17:01.530889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.173 [2024-11-05 03:17:01.661869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.173 [2024-11-05 03:17:01.661892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.440 [2024-11-05 03:17:01.883480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.440 [2024-11-05 03:17:01.883584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.818 spdk_app_start Round 2 00:06:39.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.818 03:17:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.818 03:17:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.818 03:17:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59336 /var/tmp/spdk-nbd.sock 00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59336 ']' 00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
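The waitfornbd and waitfornbd_exit helpers traced throughout these rounds gate every attach and detach on /proc/partitions: an attach counts once the nbd entry appears and a single 4 KiB direct read returns data, a detach once the entry is gone, each bounded at 20 attempts as the (( i <= 20 )) checks show. A simplified sketch of that pattern; the sleep pacing is an assumption, since xtrace does not show it:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed retry pacing
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct 4 KiB read proves the device actually serves I/O
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }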
00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.818 03:17:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.076 03:17:03 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.076 03:17:03 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:40.076 03:17:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.335 Malloc0 00:06:40.335 03:17:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.594 Malloc1 00:06:40.853 03:17:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.853 /dev/nbd0 00:06:40.853 03:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.113 03:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.113 1+0 records in 00:06:41.113 1+0 records out 
00:06:41.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415257 s, 9.9 MB/s 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:41.113 03:17:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:41.113 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.113 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.113 03:17:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.113 /dev/nbd1 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.373 1+0 records in 00:06:41.373 1+0 records out 00:06:41.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400471 s, 10.2 MB/s 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:41.373 03:17:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.373 { 00:06:41.373 "nbd_device": "/dev/nbd0", 00:06:41.373 "bdev_name": "Malloc0" 00:06:41.373 }, 00:06:41.373 { 00:06:41.373 "nbd_device": "/dev/nbd1", 00:06:41.373 "bdev_name": "Malloc1" 00:06:41.373 } 
00:06:41.373 ]' 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.373 { 00:06:41.373 "nbd_device": "/dev/nbd0", 00:06:41.373 "bdev_name": "Malloc0" 00:06:41.373 }, 00:06:41.373 { 00:06:41.373 "nbd_device": "/dev/nbd1", 00:06:41.373 "bdev_name": "Malloc1" 00:06:41.373 } 00:06:41.373 ]' 00:06:41.373 03:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.632 03:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.632 /dev/nbd1' 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.633 /dev/nbd1' 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.633 03:17:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.633 256+0 records in 00:06:41.633 256+0 records out 00:06:41.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618034 s, 170 MB/s 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.633 256+0 records in 00:06:41.633 256+0 records out 00:06:41.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329729 s, 31.8 MB/s 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.633 256+0 records in 00:06:41.633 256+0 records out 00:06:41.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036643 s, 28.6 MB/s 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.633 03:17:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.892 03:17:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.153 03:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.419 03:17:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.419 03:17:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.987 03:17:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.364 [2024-11-05 03:17:07.540706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.364 [2024-11-05 03:17:07.673175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.364 [2024-11-05 03:17:07.673175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.364 [2024-11-05 03:17:07.898545] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.364 [2024-11-05 03:17:07.898663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.773 03:17:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59336 /var/tmp/spdk-nbd.sock 00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59336 ']' 00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
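The Round 0/1/2 banners come from the driver loop in event.sh (the @23 through @35 markers above): app_repeat is started once with -t 4 and re-enters spdk_app_start on its own after each SIGTERM, so the harness only has to wait for the socket, repeat the malloc/NBD verification, and shoot the instance down over RPC. Roughly, following the traced line numbers (rpc.py again abbreviates the full path):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"                       # event.sh@24
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # event.sh@25

        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # event.sh@27, Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # event.sh@28, Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock \
            'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'                   # event.sh@30

        # ask the app to shut down; it starts the next round by itself
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # event.sh@34
        sleep 3                                                       # event.sh@35
    done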
00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.773 03:17:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:46.032 03:17:09 event.app_repeat -- event/event.sh@39 -- # killprocess 59336 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59336 ']' 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59336 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.032 03:17:09 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59336 00:06:46.290 killing process with pid 59336 00:06:46.290 03:17:09 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.290 03:17:09 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.290 03:17:09 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59336' 00:06:46.290 03:17:09 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59336 00:06:46.290 03:17:09 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59336 00:06:47.227 spdk_app_start is called in Round 0. 00:06:47.227 Shutdown signal received, stop current app iteration 00:06:47.227 Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 reinitialization... 00:06:47.227 spdk_app_start is called in Round 1. 00:06:47.227 Shutdown signal received, stop current app iteration 00:06:47.227 Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 reinitialization... 00:06:47.227 spdk_app_start is called in Round 2. 00:06:47.227 Shutdown signal received, stop current app iteration 00:06:47.227 Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 reinitialization... 00:06:47.227 spdk_app_start is called in Round 3. 00:06:47.227 Shutdown signal received, stop current app iteration 00:06:47.227 03:17:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.227 03:17:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.227 00:06:47.227 real 0m19.921s 00:06:47.227 user 0m41.923s 00:06:47.227 sys 0m3.544s 00:06:47.227 03:17:10 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.227 ************************************ 00:06:47.227 END TEST app_repeat 00:06:47.227 ************************************ 00:06:47.227 03:17:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.485 03:17:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.486 03:17:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.486 03:17:10 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.486 03:17:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.486 03:17:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.486 ************************************ 00:06:47.486 START TEST cpu_locks 00:06:47.486 ************************************ 00:06:47.486 03:17:10 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.486 * Looking for test storage... 
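killprocess (autotest_common.sh@952 through @976 above) refuses to signal blindly: the pid must be non-empty and still alive, and on Linux the command name is read back with ps so that a sudo wrapper is never signalled directly. Here it reaps app_repeat (pid 59336, comm reactor_0). A condensed sketch of those traced checks:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                 # @952: pid must be set
        kill -0 "$pid" 2>/dev/null || return 1    # @956: must still be running
        if [ "$(uname)" = Linux ]; then           # @957
            process_name=$(ps --no-headers -o comm= "$pid")   # @958
        fi
        # @962: the real helper special-cases sudo wrappers; bail out here
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"      # @970
        kill "$pid"                               # @971
        wait "$pid" || true                       # @976: reap the child
    }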
00:06:47.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.486 03:17:10 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.486 03:17:10 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.486 03:17:10 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.486 03:17:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 
00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 03:17:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:47.486 03:17:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:47.486 03:17:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:47.486 03:17:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.486 03:17:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.745 ************************************ 00:06:47.745 START TEST default_locks 00:06:47.745 ************************************ 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59785 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59785 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59785 ']' 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.745 03:17:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.745 [2024-11-05 03:17:11.195786] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
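Note on the lcov probe at the top of this trace: it feeds `lcov --version | awk '{print $NF}'` into `lt 1.15 2`, and `cmp_versions` answers by splitting both version strings on `.`, `-`, and `:` and comparing the parts numerically, padding the shorter list with zeros; because 1 < 2 on the first component, the old-lcov `--rc lcov_*` options are selected. A minimal sketch of that comparison, assuming the same splitting rules as scripts/common.sh:

    ver_lt() {                       # "is $1 < $2" under component-wise rules
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first differing part decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                     # equal is not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: use --rc lcov_* coverage options"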
00:06:47.745 [2024-11-05 03:17:11.195922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:06:48.005 [2024-11-05 03:17:11.379593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.005 [2024-11-05 03:17:11.518405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.383 03:17:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.383 03:17:12 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:49.383 03:17:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59785 00:06:49.383 03:17:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.383 03:17:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59785 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59785 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59785 ']' 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59785 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59785 00:06:49.640 killing process with pid 59785 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59785' 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59785 00:06:49.640 03:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59785 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59785 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59785 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
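The `locks_exist` assertion above reduces to one pipeline: a target that claimed core 0 must hold an fcntl lock whose path contains spdk_cpu_lock (here /var/tmp/spdk_cpu_lock_000), visible through lslocks. A sketch of the check, plus the negative assertion the test makes next once the process is gone (the `NOT` wrapper passes only on a non-zero exit, which the ERROR line below records):

    locks_exist() {                  # does pid $1 hold a /var/tmp/spdk_cpu_lock_* lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 59785 && echo "pid 59785 holds its core lock"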
00:06:52.170 ERROR: process (pid: 59785) is no longer running 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59785 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59785 ']' 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59785) - No such process 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.170 00:06:52.170 real 0m4.596s 00:06:52.170 user 0m4.384s 00:06:52.170 sys 0m0.874s 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.170 03:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 ************************************ 00:06:52.170 END TEST default_locks 00:06:52.170 ************************************ 00:06:52.170 03:17:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.170 03:17:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.170 03:17:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.170 03:17:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.428 ************************************ 00:06:52.428 START TEST default_locks_via_rpc 00:06:52.428 ************************************ 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59867 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59867 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59867 ']' 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.428 03:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.428 [2024-11-05 03:17:15.873686] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:06:52.428 [2024-11-05 03:17:15.873832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59867 ] 00:06:52.686 [2024-11-05 03:17:16.060072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.686 [2024-11-05 03:17:16.204050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59867 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59867 00:06:54.061 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59867 
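The sequence just traced is the point of default_locks_via_rpc: the core lock is dropped and re-taken at runtime over JSON-RPC instead of being fixed at startup. A sketch of the same round trip, assuming scripts/rpc.py against the default /var/tmp/spdk.sock and using the same lslocks probe as locks_exist ($pid stands for the spdk_tgt pid):

    ./scripts/rpc.py framework_disable_cpumask_locks   # release the core-0 lock
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock        # no spdk_cpu_lock held now
    ./scripts/rpc.py framework_enable_cpumask_locks    # re-claim core 0
    lslocks -p "$pid" | grep -q spdk_cpu_lock          # lock is back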
00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59867 ']' 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59867 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59867 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.319 killing process with pid 59867 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59867' 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59867 00:06:54.319 03:17:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59867 00:06:56.865 00:06:56.865 real 0m4.607s 00:06:56.865 user 0m4.416s 00:06:56.865 sys 0m0.860s 00:06:56.865 03:17:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.865 03:17:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.865 ************************************ 00:06:56.865 END TEST default_locks_via_rpc 00:06:56.865 ************************************ 00:06:56.865 03:17:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.865 03:17:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.865 03:17:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.865 03:17:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.865 ************************************ 00:06:56.865 START TEST non_locking_app_on_locked_coremask 00:06:56.865 ************************************ 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59950 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59950 /var/tmp/spdk.sock 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59950 ']' 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
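Each test ends with the teardown traced above: killprocess rejects an empty pid, probes liveness with `kill -0`, confirms the command name is an SPDK reactor (reactor_0) rather than sudo, then kills and reaps the child. A condensed sketch of that helper's shape:

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # must still be running
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for spdk_tgt
        [[ $name != sudo ]] || return 1               # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # reap so socket and lock are freed
    }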
00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.865 03:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.124 [2024-11-05 03:17:20.559073] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:06:57.124 [2024-11-05 03:17:20.559223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59950 ] 00:06:57.383 [2024-11-05 03:17:20.746402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.383 [2024-11-05 03:17:20.893330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59972 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59972 /var/tmp/spdk2.sock 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59972 ']' 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.760 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.761 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.761 03:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 [2024-11-05 03:17:22.040251] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:06:58.761 [2024-11-05 03:17:22.040405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59972 ] 00:06:58.761 [2024-11-05 03:17:22.231658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
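This is the launch pair non_locking_app_on_locked_coremask exercises: the first target takes core 0's lock as usual, and a second target can still start on the same core only because it opts out of locking and listens on its own RPC socket. Schematically:

    build/bin/spdk_tgt -m 0x1 &                        # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                       # "CPU core locks deactivated." -> no conflict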
00:06:58.761 [2024-11-05 03:17:22.231722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.020 [2024-11-05 03:17:22.520100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.554 03:17:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.554 03:17:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:01.554 03:17:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59950 00:07:01.554 03:17:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59950 00:07:01.554 03:17:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59950 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59950 ']' 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59950 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59950 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59950' 00:07:01.813 killing process with pid 59950 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59950 00:07:01.813 03:17:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59950 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59972 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59972 ']' 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59972 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59972 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.380 killing process with pid 59972 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59972' 00:07:08.380 03:17:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59972 00:07:08.380 03:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59972 00:07:10.285 00:07:10.285 real 0m12.907s 00:07:10.285 user 0m12.964s 00:07:10.285 sys 0m1.709s 00:07:10.285 03:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.285 ************************************ 00:07:10.285 END TEST non_locking_app_on_locked_coremask 00:07:10.285 ************************************ 00:07:10.285 03:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.285 03:17:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:10.285 03:17:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.285 03:17:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.285 03:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.285 ************************************ 00:07:10.285 START TEST locking_app_on_unlocked_coremask 00:07:10.285 ************************************ 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60132 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60132 /var/tmp/spdk.sock 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60132 ']' 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.285 03:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.285 [2024-11-05 03:17:33.535633] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:10.285 [2024-11-05 03:17:33.535779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60132 ] 00:07:10.285 [2024-11-05 03:17:33.722179] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.285 [2024-11-05 03:17:33.722245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.285 [2024-11-05 03:17:33.868940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60148 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60148 /var/tmp/spdk2.sock 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60148 ']' 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.664 03:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.664 [2024-11-05 03:17:35.005655] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:07:11.664 [2024-11-05 03:17:35.005807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60148 ] 00:07:11.664 [2024-11-05 03:17:35.194097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.924 [2024-11-05 03:17:35.474124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.461 03:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.461 03:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:14.461 03:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60148 00:07:14.461 03:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60148 00:07:14.461 03:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60132 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60132 ']' 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60132 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60132 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.029 killing process with pid 60132 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60132' 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60132 00:07:15.029 03:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60132 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60148 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60148 ']' 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60148 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60148 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60148' 00:07:20.314 killing process with pid 60148 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60148 00:07:20.314 03:17:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60148 00:07:23.601 00:07:23.601 real 0m13.071s 00:07:23.601 user 0m13.135s 00:07:23.601 sys 0m1.783s 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.601 ************************************ 00:07:23.601 END TEST locking_app_on_unlocked_coremask 00:07:23.601 ************************************ 00:07:23.601 03:17:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:23.601 03:17:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.601 03:17:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.601 03:17:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.601 ************************************ 00:07:23.601 START TEST locking_app_on_locked_coremask 00:07:23.601 ************************************ 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60313 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60313 /var/tmp/spdk.sock 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60313 ']' 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.601 03:17:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.601 [2024-11-05 03:17:46.680658] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:07:23.601 [2024-11-05 03:17:46.680791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:07:23.601 [2024-11-05 03:17:46.865703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.601 [2024-11-05 03:17:47.006700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60334 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60334 /var/tmp/spdk2.sock 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60334 /var/tmp/spdk2.sock 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60334 /var/tmp/spdk2.sock 00:07:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60334 ']' 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.540 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.799 [2024-11-05 03:17:48.189005] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:07:24.799 [2024-11-05 03:17:48.189713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:07:24.799 [2024-11-05 03:17:48.382257] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60313 has claimed it. 00:07:24.799 [2024-11-05 03:17:48.382352] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:25.374 ERROR: process (pid: 60334) is no longer running 00:07:25.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60334) - No such process 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60313 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60313 00:07:25.374 03:17:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60313 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60313 ']' 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60313 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60313 00:07:25.942 killing process with pid 60313 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60313' 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60313 00:07:25.942 03:17:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60313 00:07:28.483 00:07:28.483 real 0m5.401s 00:07:28.483 user 0m5.412s 00:07:28.483 sys 0m1.077s 00:07:28.483 ************************************ 00:07:28.483 END TEST locking_app_on_locked_coremask 00:07:28.483 ************************************ 00:07:28.483 03:17:51 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.483 03:17:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.483 03:17:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:28.483 03:17:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:28.483 03:17:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.483 03:17:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.483 ************************************ 00:07:28.483 START TEST locking_overlapped_coremask 00:07:28.483 ************************************ 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60404 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60404 /var/tmp/spdk.sock 00:07:28.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60404 ']' 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.483 03:17:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 [2024-11-05 03:17:52.162945] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
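The failed launch above (pid 60334) is the inverse case: with locks left on, a second target on the same mask aborts at startup instead of sharing the core, and the harness' NOT wrapper treats that non-zero exit as a pass. Schematically:

    build/bin/spdk_tgt -m 0x1 &                        # pid 60313 claims core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: "Cannot create lock on core 0,
                                                       #  probably process 60313 has claimed it."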
00:07:28.742 [2024-11-05 03:17:52.163069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60404 ] 00:07:29.000 [2024-11-05 03:17:52.347659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.000 [2024-11-05 03:17:52.498654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.000 [2024-11-05 03:17:52.498793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.000 [2024-11-05 03:17:52.498846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60427 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60427 /var/tmp/spdk2.sock 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60427 /var/tmp/spdk2.sock 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60427 /var/tmp/spdk2.sock 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60427 ']' 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:29.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:29.936 03:17:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.195 [2024-11-05 03:17:53.648949] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:07:30.195 [2024-11-05 03:17:53.649113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:07:30.454 [2024-11-05 03:17:53.840730] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60404 has claimed it. 00:07:30.454 [2024-11-05 03:17:53.844318] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.021 ERROR: process (pid: 60427) is no longer running 00:07:31.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60427) - No such process 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60404 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60404 ']' 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60404 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60404 00:07:31.021 killing process with pid 60404 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60404' 00:07:31.021 03:17:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60404 00:07:31.021 03:17:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60404 00:07:33.553 00:07:33.553 real 0m4.959s 00:07:33.553 user 0m13.329s 00:07:33.553 sys 0m0.850s 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.553 ************************************ 00:07:33.553 END TEST locking_overlapped_coremask 00:07:33.553 ************************************ 00:07:33.553 03:17:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:33.553 03:17:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.553 03:17:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.553 03:17:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.553 ************************************ 00:07:33.553 START TEST locking_overlapped_coremask_via_rpc 00:07:33.553 ************************************ 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60497 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60497 /var/tmp/spdk.sock 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60497 ']' 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.553 03:17:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.813 [2024-11-05 03:17:57.193844] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:33.813 [2024-11-05 03:17:57.193982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:07:33.813 [2024-11-05 03:17:57.376997] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
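The overlapped-coremask failure above is mask arithmetic: -m 0x7 pins reactors to cores 0-2 and -m 0x1c to cores 2-4, so the two masks intersect on core 2, the core the second target fails to claim while check_remaining_locks still finds spdk_cpu_lock_000 through _002 held:

    printf 'overlap: 0x%x\n' $((0x7 & 0x1c))    # -> overlap: 0x4, i.e. core 2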
00:07:33.813 [2024-11-05 03:17:57.377081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.072 [2024-11-05 03:17:57.529111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.072 [2024-11-05 03:17:57.529222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.072 [2024-11-05 03:17:57.529233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60519 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60519 /var/tmp/spdk2.sock 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60519 ']' 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.007 03:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.267 [2024-11-05 03:17:58.666034] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:35.267 [2024-11-05 03:17:58.666200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ] 00:07:35.267 [2024-11-05 03:17:58.850120] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:35.526 [2024-11-05 03:17:58.854299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.526 [2024-11-05 03:17:59.094274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.526 [2024-11-05 03:17:59.097414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.526 [2024-11-05 03:17:59.097443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.064 [2024-11-05 03:18:01.226539] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60497 has claimed it. 
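The ERROR above is the point of the test: coremask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so once the first target claims its cores, core 2 is no longer claimable. The overlap is plain bit arithmetic (a sketch, nothing SPDK-specific):

# 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2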
00:07:38.064 request: 00:07:38.064 { 00:07:38.064 "method": "framework_enable_cpumask_locks", 00:07:38.064 "req_id": 1 00:07:38.064 } 00:07:38.064 Got JSON-RPC error response 00:07:38.064 response: 00:07:38.064 { 00:07:38.064 "code": -32603, 00:07:38.064 "message": "Failed to claim CPU core: 2" 00:07:38.064 } 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60497 /var/tmp/spdk.sock 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60497 ']' 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60519 /var/tmp/spdk2.sock 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60519 ']' 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
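The request/response dump above can be reproduced by hand with the rpc.py client this log uses elsewhere; both targets start with locks disabled, the first enable claims cores 0-2, and the second enable then fails with -32603. A hedged sketch (method names taken verbatim from the trace):

# First target (default socket) claims its cores, creating /var/tmp/spdk_cpu_lock_000..002.
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# Second target shares core 2, so the same call fails: "Failed to claim CPU core: 2".
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

Those three lock files are exactly what check_remaining_locks compares against /var/tmp/spdk_cpu_lock_{000..002} below.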
00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.064 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.324 00:07:38.324 real 0m4.600s 00:07:38.324 user 0m1.265s 00:07:38.324 sys 0m0.233s 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.324 03:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.324 ************************************ 00:07:38.324 END TEST locking_overlapped_coremask_via_rpc 00:07:38.324 ************************************ 00:07:38.324 03:18:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:38.324 03:18:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60497 ]] 00:07:38.324 03:18:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60497 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60497 ']' 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60497 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60497 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:38.324 killing process with pid 60497 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60497' 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60497 00:07:38.324 03:18:01 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60497 00:07:41.650 03:18:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60519 ]] 00:07:41.650 03:18:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60519 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60519 ']' 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60519 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.650 
03:18:04 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60519 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:41.650 killing process with pid 60519 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60519' 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60519 00:07:41.650 03:18:04 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60519 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60497 ]] 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60497 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60497 ']' 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60497 00:07:43.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60497) - No such process 00:07:43.563 Process with pid 60497 is not found 00:07:43.563 Process with pid 60519 is not found 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60497 is not found' 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60519 ]] 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60519 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60519 ']' 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60519 00:07:43.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60519) - No such process 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60519 is not found' 00:07:43.563 03:18:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.563 00:07:43.563 real 0m56.118s 00:07:43.563 user 1m32.633s 00:07:43.563 sys 0m8.840s 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.563 03:18:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.563 ************************************ 00:07:43.563 END TEST cpu_locks 00:07:43.563 ************************************ 00:07:43.563 ************************************ 00:07:43.563 END TEST event 00:07:43.563 ************************************ 00:07:43.563 00:07:43.563 real 1m28.805s 00:07:43.563 user 2m36.495s 00:07:43.563 sys 0m13.809s 00:07:43.563 03:18:07 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.563 03:18:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.563 03:18:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:43.563 03:18:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.563 03:18:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.563 03:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:43.563 ************************************ 00:07:43.563 START TEST thread 00:07:43.563 ************************************ 00:07:43.564 03:18:07 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:43.822 * Looking for test storage... 
00:07:43.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:43.822 03:18:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.822 03:18:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.822 03:18:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.822 03:18:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.822 03:18:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.822 03:18:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.822 03:18:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.822 03:18:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.822 03:18:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.822 03:18:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.822 03:18:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.822 03:18:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:43.822 03:18:07 thread -- scripts/common.sh@345 -- # : 1 00:07:43.822 03:18:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.822 03:18:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.822 03:18:07 thread -- scripts/common.sh@365 -- # decimal 1 00:07:43.822 03:18:07 thread -- scripts/common.sh@353 -- # local d=1 00:07:43.822 03:18:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.822 03:18:07 thread -- scripts/common.sh@355 -- # echo 1 00:07:43.822 03:18:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.822 03:18:07 thread -- scripts/common.sh@366 -- # decimal 2 00:07:43.822 03:18:07 thread -- scripts/common.sh@353 -- # local d=2 00:07:43.822 03:18:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.822 03:18:07 thread -- scripts/common.sh@355 -- # echo 2 00:07:43.822 03:18:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.822 03:18:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.822 03:18:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.822 03:18:07 thread -- scripts/common.sh@368 -- # return 0 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.822 --rc genhtml_branch_coverage=1 00:07:43.822 --rc genhtml_function_coverage=1 00:07:43.822 --rc genhtml_legend=1 00:07:43.822 --rc geninfo_all_blocks=1 00:07:43.822 --rc geninfo_unexecuted_blocks=1 00:07:43.822 00:07:43.822 ' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.822 --rc genhtml_branch_coverage=1 00:07:43.822 --rc genhtml_function_coverage=1 00:07:43.822 --rc genhtml_legend=1 00:07:43.822 --rc geninfo_all_blocks=1 00:07:43.822 --rc geninfo_unexecuted_blocks=1 00:07:43.822 00:07:43.822 ' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:43.822 --rc genhtml_branch_coverage=1 00:07:43.822 --rc genhtml_function_coverage=1 00:07:43.822 --rc genhtml_legend=1 00:07:43.822 --rc geninfo_all_blocks=1 00:07:43.822 --rc geninfo_unexecuted_blocks=1 00:07:43.822 00:07:43.822 ' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.822 --rc genhtml_branch_coverage=1 00:07:43.822 --rc genhtml_function_coverage=1 00:07:43.822 --rc genhtml_legend=1 00:07:43.822 --rc geninfo_all_blocks=1 00:07:43.822 --rc geninfo_unexecuted_blocks=1 00:07:43.822 00:07:43.822 ' 00:07:43.822 03:18:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.822 03:18:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.822 ************************************ 00:07:43.822 START TEST thread_poller_perf 00:07:43.822 ************************************ 00:07:43.822 03:18:07 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:43.822 [2024-11-05 03:18:07.371207] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:43.822 [2024-11-05 03:18:07.371485] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60721 ] 00:07:44.080 [2024-11-05 03:18:07.554456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.339 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:44.339 [2024-11-05 03:18:07.677654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.717 [2024-11-05T03:18:09.301Z] ====================================== 00:07:45.717 [2024-11-05T03:18:09.301Z] busy:2502969444 (cyc) 00:07:45.717 [2024-11-05T03:18:09.301Z] total_run_count: 389000 00:07:45.717 [2024-11-05T03:18:09.301Z] tsc_hz: 2490000000 (cyc) 00:07:45.717 [2024-11-05T03:18:09.301Z] ====================================== 00:07:45.717 [2024-11-05T03:18:09.301Z] poller_cost: 6434 (cyc), 2583 (nsec) 00:07:45.717 ************************************ 00:07:45.717 END TEST thread_poller_perf 00:07:45.717 ************************************ 00:07:45.717 00:07:45.717 real 0m1.594s 00:07:45.717 user 0m1.388s 00:07:45.717 sys 0m0.098s 00:07:45.717 03:18:08 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.717 03:18:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.717 03:18:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:45.717 03:18:08 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:45.717 03:18:08 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.717 03:18:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.717 ************************************ 00:07:45.717 START TEST thread_poller_perf 00:07:45.717 ************************************ 00:07:45.717 03:18:08 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:45.717 [2024-11-05 03:18:09.043089] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:45.717 [2024-11-05 03:18:09.043197] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:07:45.717 [2024-11-05 03:18:09.223914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.976 Running 1000 pollers for 1 seconds with 0 microseconds period. 
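The poller_cost lines in these result tables are simply busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz. Replaying the 1-microsecond run's figures above as a quick arithmetic check:

# 2502969444 busy cycles over 389000 poller runs, tsc_hz = 2490000000 (2.49 GHz)
echo $(( 2502969444 / 389000 ))        # 6434 cycles per poll
awk 'BEGIN { print 6434 / 2.49 }'      # ~2583.9 ns, reported above as 2583 (nsec)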
00:07:45.976 [2024-11-05 03:18:09.341430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.377 [2024-11-05T03:18:10.961Z] ====================================== 00:07:47.377 [2024-11-05T03:18:10.961Z] busy:2494076230 (cyc) 00:07:47.377 [2024-11-05T03:18:10.961Z] total_run_count: 5108000 00:07:47.377 [2024-11-05T03:18:10.961Z] tsc_hz: 2490000000 (cyc) 00:07:47.377 [2024-11-05T03:18:10.961Z] ====================================== 00:07:47.377 [2024-11-05T03:18:10.961Z] poller_cost: 488 (cyc), 195 (nsec) 00:07:47.377 ************************************ 00:07:47.377 END TEST thread_poller_perf 00:07:47.377 ************************************ 00:07:47.377 00:07:47.377 real 0m1.581s 00:07:47.377 user 0m1.368s 00:07:47.377 sys 0m0.105s 00:07:47.377 03:18:10 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.377 03:18:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.377 03:18:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:47.377 ************************************ 00:07:47.377 END TEST thread 00:07:47.377 ************************************ 00:07:47.377 00:07:47.377 real 0m3.550s 00:07:47.377 user 0m2.910s 00:07:47.377 sys 0m0.428s 00:07:47.377 03:18:10 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.377 03:18:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.377 03:18:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:47.377 03:18:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.377 03:18:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:47.377 03:18:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.377 03:18:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.377 ************************************ 00:07:47.377 START TEST app_cmdline 00:07:47.377 ************************************ 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.377 * Looking for test storage... 
00:07:47.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.377 03:18:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.377 --rc genhtml_branch_coverage=1 00:07:47.377 --rc genhtml_function_coverage=1 00:07:47.377 --rc genhtml_legend=1 00:07:47.377 --rc geninfo_all_blocks=1 00:07:47.377 --rc geninfo_unexecuted_blocks=1 00:07:47.377 00:07:47.377 ' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.377 --rc genhtml_branch_coverage=1 00:07:47.377 --rc genhtml_function_coverage=1 00:07:47.377 --rc genhtml_legend=1 00:07:47.377 --rc geninfo_all_blocks=1 00:07:47.377 --rc geninfo_unexecuted_blocks=1 00:07:47.377 
00:07:47.377 ' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.377 --rc genhtml_branch_coverage=1 00:07:47.377 --rc genhtml_function_coverage=1 00:07:47.377 --rc genhtml_legend=1 00:07:47.377 --rc geninfo_all_blocks=1 00:07:47.377 --rc geninfo_unexecuted_blocks=1 00:07:47.377 00:07:47.377 ' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.377 --rc genhtml_branch_coverage=1 00:07:47.377 --rc genhtml_function_coverage=1 00:07:47.377 --rc genhtml_legend=1 00:07:47.377 --rc geninfo_all_blocks=1 00:07:47.377 --rc geninfo_unexecuted_blocks=1 00:07:47.377 00:07:47.377 ' 00:07:47.377 03:18:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:47.377 03:18:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60841 00:07:47.377 03:18:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:47.377 03:18:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60841 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60841 ']' 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.377 03:18:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.637 [2024-11-05 03:18:11.032005] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
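This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so its RPC surface is just those two methods; cmdline.sh checks both that they work and that anything outside the set is rejected (the -32601 "Method not found" further below). A sketch of the two permitted calls, mirroring the jq/sort pipeline in the trace:

# Enumerate the restricted method set - exactly the two allowed names here.
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
# The other allowed call; it returns the version JSON shown below.
scripts/rpc.py spdk_get_version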
00:07:47.637 [2024-11-05 03:18:11.032365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60841 ] 00:07:47.637 [2024-11-05 03:18:11.216423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.896 [2024-11-05 03:18:11.338701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.833 03:18:12 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.833 03:18:12 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:48.833 { 00:07:48.833 "version": "SPDK v25.01-pre git sha1 a46541aa1", 00:07:48.833 "fields": { 00:07:48.833 "major": 25, 00:07:48.833 "minor": 1, 00:07:48.833 "patch": 0, 00:07:48.833 "suffix": "-pre", 00:07:48.833 "commit": "a46541aa1" 00:07:48.833 } 00:07:48.833 } 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:48.833 03:18:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.833 03:18:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:48.833 03:18:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.092 03:18:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:49.092 03:18:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:49.092 03:18:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.092 request: 00:07:49.092 { 00:07:49.092 "method": "env_dpdk_get_mem_stats", 00:07:49.092 "req_id": 1 00:07:49.092 } 00:07:49.092 Got JSON-RPC error response 00:07:49.092 response: 00:07:49.092 { 00:07:49.092 "code": -32601, 00:07:49.092 "message": "Method not found" 00:07:49.092 } 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.092 03:18:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60841 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60841 ']' 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60841 00:07:49.092 03:18:12 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60841 00:07:49.351 killing process with pid 60841 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60841' 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@971 -- # kill 60841 00:07:49.351 03:18:12 app_cmdline -- common/autotest_common.sh@976 -- # wait 60841 00:07:51.887 ************************************ 00:07:51.887 END TEST app_cmdline 00:07:51.887 ************************************ 00:07:51.887 00:07:51.887 real 0m4.419s 00:07:51.887 user 0m4.634s 00:07:51.887 sys 0m0.658s 00:07:51.887 03:18:15 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.887 03:18:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.887 03:18:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.887 03:18:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.887 03:18:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.887 03:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:51.887 ************************************ 00:07:51.887 START TEST version 00:07:51.887 ************************************ 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:51.887 * Looking for test storage... 
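The version suite starting here never launches a target; it grep/cut/tr's each component out of include/spdk/version.h and reassembles the string, mapping the "-pre" suffix to an rc0 tag. A one-line equivalent of the MAJOR extraction visible in the trace below (MINOR, PATCH, and SUFFIX follow the same pattern):

grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
# major=25, minor=1, patch=0, suffix=-pre  =>  "25.1", then "25.1rc0" since patch == 0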
00:07:51.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.887 03:18:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.887 03:18:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.887 03:18:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.887 03:18:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.887 03:18:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.887 03:18:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.887 03:18:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.887 03:18:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.887 03:18:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.887 03:18:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.887 03:18:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.887 03:18:15 version -- scripts/common.sh@344 -- # case "$op" in 00:07:51.887 03:18:15 version -- scripts/common.sh@345 -- # : 1 00:07:51.887 03:18:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.887 03:18:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.887 03:18:15 version -- scripts/common.sh@365 -- # decimal 1 00:07:51.887 03:18:15 version -- scripts/common.sh@353 -- # local d=1 00:07:51.887 03:18:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.887 03:18:15 version -- scripts/common.sh@355 -- # echo 1 00:07:51.887 03:18:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.887 03:18:15 version -- scripts/common.sh@366 -- # decimal 2 00:07:51.887 03:18:15 version -- scripts/common.sh@353 -- # local d=2 00:07:51.887 03:18:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.887 03:18:15 version -- scripts/common.sh@355 -- # echo 2 00:07:51.887 03:18:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.887 03:18:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.887 03:18:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.887 03:18:15 version -- scripts/common.sh@368 -- # return 0 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.887 --rc genhtml_branch_coverage=1 00:07:51.887 --rc genhtml_function_coverage=1 00:07:51.887 --rc genhtml_legend=1 00:07:51.887 --rc geninfo_all_blocks=1 00:07:51.887 --rc geninfo_unexecuted_blocks=1 00:07:51.887 00:07:51.887 ' 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.887 --rc genhtml_branch_coverage=1 00:07:51.887 --rc genhtml_function_coverage=1 00:07:51.887 --rc genhtml_legend=1 00:07:51.887 --rc geninfo_all_blocks=1 00:07:51.887 --rc geninfo_unexecuted_blocks=1 00:07:51.887 00:07:51.887 ' 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.887 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:51.887 --rc genhtml_branch_coverage=1 00:07:51.887 --rc genhtml_function_coverage=1 00:07:51.887 --rc genhtml_legend=1 00:07:51.887 --rc geninfo_all_blocks=1 00:07:51.887 --rc geninfo_unexecuted_blocks=1 00:07:51.887 00:07:51.887 ' 00:07:51.887 03:18:15 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.887 --rc genhtml_branch_coverage=1 00:07:51.887 --rc genhtml_function_coverage=1 00:07:51.887 --rc genhtml_legend=1 00:07:51.887 --rc geninfo_all_blocks=1 00:07:51.887 --rc geninfo_unexecuted_blocks=1 00:07:51.887 00:07:51.887 ' 00:07:51.887 03:18:15 version -- app/version.sh@17 -- # get_header_version major 00:07:51.887 03:18:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # cut -f2 00:07:51.887 03:18:15 version -- app/version.sh@17 -- # major=25 00:07:51.887 03:18:15 version -- app/version.sh@18 -- # get_header_version minor 00:07:51.887 03:18:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # cut -f2 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:51.887 03:18:15 version -- app/version.sh@18 -- # minor=1 00:07:51.887 03:18:15 version -- app/version.sh@19 -- # get_header_version patch 00:07:51.887 03:18:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # cut -f2 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:51.887 03:18:15 version -- app/version.sh@19 -- # patch=0 00:07:51.887 03:18:15 version -- app/version.sh@20 -- # get_header_version suffix 00:07:51.887 03:18:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # cut -f2 00:07:51.887 03:18:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:51.887 03:18:15 version -- app/version.sh@20 -- # suffix=-pre 00:07:51.887 03:18:15 version -- app/version.sh@22 -- # version=25.1 00:07:52.146 03:18:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:52.146 03:18:15 version -- app/version.sh@28 -- # version=25.1rc0 00:07:52.146 03:18:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:52.146 03:18:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:52.146 03:18:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:52.146 03:18:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:52.146 ************************************ 00:07:52.146 END TEST version 00:07:52.146 ************************************ 00:07:52.146 00:07:52.146 real 0m0.326s 00:07:52.146 user 0m0.171s 00:07:52.146 sys 0m0.207s 00:07:52.146 03:18:15 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.146 03:18:15 version -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 03:18:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:52.146 03:18:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:52.146 03:18:15 -- spdk/autotest.sh@194 -- # uname -s 00:07:52.146 03:18:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:52.147 03:18:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:52.147 03:18:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:52.147 03:18:15 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:52.147 03:18:15 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:52.147 03:18:15 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:52.147 03:18:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.147 03:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:52.147 ************************************ 00:07:52.147 START TEST blockdev_nvme 00:07:52.147 ************************************ 00:07:52.147 03:18:15 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:52.147 * Looking for test storage... 00:07:52.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.406 03:18:15 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:52.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.406 --rc genhtml_branch_coverage=1 00:07:52.406 --rc genhtml_function_coverage=1 00:07:52.406 --rc genhtml_legend=1 00:07:52.406 --rc geninfo_all_blocks=1 00:07:52.406 --rc geninfo_unexecuted_blocks=1 00:07:52.406 00:07:52.406 ' 00:07:52.406 03:18:15 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:52.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.406 --rc genhtml_branch_coverage=1 00:07:52.407 --rc genhtml_function_coverage=1 00:07:52.407 --rc genhtml_legend=1 00:07:52.407 --rc geninfo_all_blocks=1 00:07:52.407 --rc geninfo_unexecuted_blocks=1 00:07:52.407 00:07:52.407 ' 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.407 --rc genhtml_branch_coverage=1 00:07:52.407 --rc genhtml_function_coverage=1 00:07:52.407 --rc genhtml_legend=1 00:07:52.407 --rc geninfo_all_blocks=1 00:07:52.407 --rc geninfo_unexecuted_blocks=1 00:07:52.407 00:07:52.407 ' 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:52.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.407 --rc genhtml_branch_coverage=1 00:07:52.407 --rc genhtml_function_coverage=1 00:07:52.407 --rc genhtml_legend=1 00:07:52.407 --rc geninfo_all_blocks=1 00:07:52.407 --rc geninfo_unexecuted_blocks=1 00:07:52.407 00:07:52.407 ' 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:52.407 03:18:15 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61035 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:52.407 03:18:15 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61035 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61035 ']' 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.407 03:18:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.407 [2024-11-05 03:18:15.968584] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
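blockdev.sh in nvme mode feeds spdk_tgt a config generated by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per local PCIe controller (the four-controller JSON passed to load_subsystem_config just below). The same attachment can be done one RPC at a time; a hedged sketch using the first controller from this run and the usual rpc.py flag spelling (-b name, -t trtype, -a traddr):

# Attach the QEMU NVMe controller at 0000:00:10.0 under the bdev name prefix Nvme0...
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
# ...then list the resulting unclaimed bdevs, as the suite does below with jq.
scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'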
00:07:52.407 [2024-11-05 03:18:15.968924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:07:52.666 [2024-11-05 03:18:16.162251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.925 [2024-11-05 03:18:16.272489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.862 03:18:17 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.862 03:18:17 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.862 03:18:17 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:53.862 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.862 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.122 03:18:17 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:54.122 03:18:17 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.122 03:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.381 03:18:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.381 03:18:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ac15af64-695d-4100-8d13-bee9760482ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ac15af64-695d-4100-8d13-bee9760482ae",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8065775f-c965-4ed0-b2e8-fdcc9d90a726"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8065775f-c965-4ed0-b2e8-fdcc9d90a726",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1cdd43ac-23b4-4bb9-bc05-7d88a5effa3e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cdd43ac-23b4-4bb9-bc05-7d88a5effa3e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fe3e01ad-ce3a-4c2f-ae3c-0b196afa958d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe3e01ad-ce3a-4c2f-ae3c-0b196afa958d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7cf96b08-2f8c-4374-93d8-a1c1bce19623"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "7cf96b08-2f8c-4374-93d8-a1c1bce19623",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ea58b694-bf96-40c7-b153-4ae81f7170ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ea58b694-bf96-40c7-b153-4ae81f7170ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:54.382 03:18:17 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61035 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61035 ']' 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61035 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:07:54.382 03:18:17 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61035 00:07:54.382 killing process with pid 61035 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61035' 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61035 00:07:54.382 03:18:17 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61035 00:07:56.918 03:18:20 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.918 03:18:20 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.918 03:18:20 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:56.918 03:18:20 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.918 03:18:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.918 ************************************ 00:07:56.918 START TEST bdev_hello_world 00:07:56.918 ************************************ 00:07:56.918 03:18:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.918 [2024-11-05 03:18:20.312365] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:07:56.918 [2024-11-05 03:18:20.312492] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61130 ] 00:07:56.918 [2024-11-05 03:18:20.494638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.177 [2024-11-05 03:18:20.612187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.753 [2024-11-05 03:18:21.267324] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:57.753 [2024-11-05 03:18:21.267396] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:57.753 [2024-11-05 03:18:21.267422] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:57.753 [2024-11-05 03:18:21.270817] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:57.753 [2024-11-05 03:18:21.271471] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:57.753 [2024-11-05 03:18:21.271502] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:57.753 [2024-11-05 03:18:21.271691] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
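The hello_world test above amounts to pointing the prebuilt hello_bdev example at a JSON bdev config plus a bdev name. A minimal standalone sketch of the same invocation, assuming a hand-written config that mirrors the bdev_nvme_attach_controller parameters logged earlier in this run (the actual test/bdev/bdev.json used by the job is not reproduced in this log, and /tmp/hello_bdev.json is a hypothetical path):

    # Hypothetical minimal config; the PCIe address is the one attached for Nvme0 earlier in this log.
    cat > /tmp/hello_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    # Same shape as the CI command: writes "Hello World!" to Nvme0n1 and reads it back.
    ./build/examples/hello_bdev --json /tmp/hello_bdev.json -b Nvme0n1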
00:07:57.753 00:07:57.753 [2024-11-05 03:18:21.271720] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:59.131 00:07:59.131 real 0m2.287s 00:07:59.131 user 0m1.907s 00:07:59.131 sys 0m0.271s 00:07:59.131 03:18:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.131 03:18:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:59.131 ************************************ 00:07:59.131 END TEST bdev_hello_world 00:07:59.131 ************************************ 00:07:59.131 03:18:22 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:59.131 03:18:22 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:59.131 03:18:22 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.131 03:18:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.131 ************************************ 00:07:59.131 START TEST bdev_bounds 00:07:59.131 ************************************ 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61178 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.131 Process bdevio pid: 61178 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61178' 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61178 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61178 ']' 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.131 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.132 03:18:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:59.132 [2024-11-05 03:18:22.687989] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
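The bdev_bounds test starting here runs bdevio in wait mode: per the invocation above, -w keeps the app idle after the bdevs are loaded until an RPC arrives on /var/tmp/spdk.sock (which is what the waitforlisten loop polls for), and tests.py perform_tests then triggers the CUnit suites whose output follows. A two-terminal sketch of the same flow, with the flags copied verbatim from the CI command:

    # Terminal 1: load the bdev config and wait (-w) for the perform_tests RPC; -s 0 as in the CI invocation.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json
    # Terminal 2: fire the registered bdevio suites over the default RPC socket.
    ./test/bdev/bdevio/tests.py perform_tests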
00:07:59.132 [2024-11-05 03:18:22.688874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61178 ] 00:07:59.391 [2024-11-05 03:18:22.881170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.651 [2024-11-05 03:18:23.035910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.651 [2024-11-05 03:18:23.036086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.651 [2024-11-05 03:18:23.036644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.229 03:18:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.229 03:18:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:00.229 03:18:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:00.488 I/O targets: 00:08:00.488 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:00.488 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:00.488 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.488 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.488 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.489 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:00.489 00:08:00.489 00:08:00.489 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.489 http://cunit.sourceforge.net/ 00:08:00.489 00:08:00.489 00:08:00.489 Suite: bdevio tests on: Nvme3n1 00:08:00.489 Test: blockdev write read block ...passed 00:08:00.489 Test: blockdev write zeroes read block ...passed 00:08:00.489 Test: blockdev write zeroes read no split ...passed 00:08:00.489 Test: blockdev write zeroes read split ...passed 00:08:00.489 Test: blockdev write zeroes read split partial ...passed 00:08:00.489 Test: blockdev reset ...[2024-11-05 03:18:23.970174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:00.489 [2024-11-05 03:18:23.974451] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:08:00.489 Test: blockdev write read 8 blocks ...
00:08:00.489 passed 00:08:00.489 Test: blockdev write read size > 128k ...passed 00:08:00.489 Test: blockdev write read invalid size ...passed 00:08:00.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.489 Test: blockdev write read max offset ...passed 00:08:00.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.489 Test: blockdev writev readv 8 blocks ...passed 00:08:00.489 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.489 Test: blockdev writev readv block ...passed 00:08:00.489 Test: blockdev writev readv size > 128k ...passed 00:08:00.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.489 Test: blockdev comparev and writev ...[2024-11-05 03:18:23.984512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf60a000 len:0x1000 00:08:00.489 [2024-11-05 03:18:23.984571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.489 passed 00:08:00.489 Test: blockdev nvme passthru rw ...passed 00:08:00.489 Test: blockdev nvme passthru vendor specific ...[2024-11-05 03:18:23.985559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.489 [2024-11-05 03:18:23.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.489 passed 00:08:00.489 Test: blockdev nvme admin passthru ...passed 00:08:00.489 Test: blockdev copy ...passed 00:08:00.489 Suite: bdevio tests on: Nvme2n3 00:08:00.489 Test: blockdev write read block ...passed 00:08:00.489 Test: blockdev write zeroes read block ...passed 00:08:00.489 Test: blockdev write zeroes read no split ...passed 00:08:00.489 Test: blockdev write zeroes read split ...passed 00:08:00.489 Test: blockdev write zeroes read split partial ...passed 00:08:00.489 Test: blockdev reset ...[2024-11-05 03:18:24.061656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.489 [2024-11-05 03:18:24.066331] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:08:00.489 00:08:00.489 Test: blockdev write read 8 blocks ...passed 00:08:00.489 Test: blockdev write read size > 128k ...passed 00:08:00.489 Test: blockdev write read invalid size ...passed 00:08:00.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.489 Test: blockdev write read max offset ...passed 00:08:00.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.489 Test: blockdev writev readv 8 blocks ...passed 00:08:00.489 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.489 Test: blockdev writev readv block ...passed 00:08:00.748 Test: blockdev writev readv size > 128k ...passed 00:08:00.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.748 Test: blockdev comparev and writev ...[2024-11-05 03:18:24.075960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2806000 len:0x1000 00:08:00.748 [2024-11-05 03:18:24.076021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.748 passed 00:08:00.748 Test: blockdev nvme passthru rw ...passed 00:08:00.748 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.748 Test: blockdev nvme admin passthru ...[2024-11-05 03:18:24.076939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.748 [2024-11-05 03:18:24.076978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.748 passed 00:08:00.748 Test: blockdev copy ...passed 00:08:00.748 Suite: bdevio tests on: Nvme2n2 00:08:00.748 Test: blockdev write read block ...passed 00:08:00.748 Test: blockdev write zeroes read block ...passed 00:08:00.748 Test: blockdev write zeroes read no split ...passed 00:08:00.748 Test: blockdev write zeroes read split ...passed 00:08:00.748 Test: blockdev write zeroes read split partial ...passed 00:08:00.748 Test: blockdev reset ...[2024-11-05 03:18:24.154303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.748 [2024-11-05 03:18:24.159304] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:08:00.748 00:08:00.748 Test: blockdev write read 8 blocks ...passed 00:08:00.748 Test: blockdev write read size > 128k ...passed 00:08:00.748 Test: blockdev write read invalid size ...passed 00:08:00.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.748 Test: blockdev write read max offset ...passed 00:08:00.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.748 Test: blockdev writev readv 8 blocks ...passed 00:08:00.748 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.748 Test: blockdev writev readv block ...passed 00:08:00.748 Test: blockdev writev readv size > 128k ...passed 00:08:00.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.748 Test: blockdev comparev and writev ...[2024-11-05 03:18:24.170058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dae3c000 len:0x1000 00:08:00.748 passed 00:08:00.748 Test: blockdev nvme passthru rw ...[2024-11-05 03:18:24.170170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.748 passed 00:08:00.748 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.748 Test: blockdev nvme admin passthru ...[2024-11-05 03:18:24.171531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.748 [2024-11-05 03:18:24.171570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.748 passed 00:08:00.748 Test: blockdev copy ...passed 00:08:00.748 Suite: bdevio tests on: Nvme2n1 00:08:00.748 Test: blockdev write read block ...passed 00:08:00.748 Test: blockdev write zeroes read block ...passed 00:08:00.748 Test: blockdev write zeroes read no split ...passed 00:08:00.748 Test: blockdev write zeroes read split ...passed 00:08:00.748 Test: blockdev write zeroes read split partial ...passed 00:08:00.748 Test: blockdev reset ...[2024-11-05 03:18:24.249036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.748 passed 00:08:00.748 Test: blockdev write read 8 blocks ...[2024-11-05 03:18:24.253665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:00.748 passed 00:08:00.748 Test: blockdev write read size > 128k ...passed 00:08:00.748 Test: blockdev write read invalid size ...passed 00:08:00.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.748 Test: blockdev write read max offset ...passed 00:08:00.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.748 Test: blockdev writev readv 8 blocks ...passed 00:08:00.748 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.748 Test: blockdev writev readv block ...passed 00:08:00.748 Test: blockdev writev readv size > 128k ...passed 00:08:00.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.748 Test: blockdev comparev and writev ...[2024-11-05 03:18:24.263950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dae38000 len:0x1000 00:08:00.748 [2024-11-05 03:18:24.264011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.748 passed 00:08:00.748 Test: blockdev nvme passthru rw ...passed 00:08:00.748 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.748 Test: blockdev nvme admin passthru ...[2024-11-05 03:18:24.264924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.748 [2024-11-05 03:18:24.264960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.749 passed 00:08:00.749 Test: blockdev copy ...passed 00:08:00.749 Suite: bdevio tests on: Nvme1n1 00:08:00.749 Test: blockdev write read block ...passed 00:08:00.749 Test: blockdev write zeroes read block ...passed 00:08:00.749 Test: blockdev write zeroes read no split ...passed 00:08:00.749 Test: blockdev write zeroes read split ...passed 00:08:01.008 Test: blockdev write zeroes read split partial ...passed 00:08:01.008 Test: blockdev reset ...[2024-11-05 03:18:24.345179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:01.008 [2024-11-05 03:18:24.349371] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:01.008 passed 00:08:01.008 Test: blockdev write read 8 blocks ...passed 00:08:01.008 Test: blockdev write read size > 128k ...passed 00:08:01.008 Test: blockdev write read invalid size ...passed 00:08:01.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.008 Test: blockdev write read max offset ...passed 00:08:01.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.008 Test: blockdev writev readv 8 blocks ...passed 00:08:01.008 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.008 Test: blockdev writev readv block ...passed 00:08:01.008 Test: blockdev writev readv size > 128k ...passed 00:08:01.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.008 Test: blockdev comparev and writev ...[2024-11-05 03:18:24.358482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dae34000 len:0x1000 00:08:01.008 [2024-11-05 03:18:24.358547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.008 passed 00:08:01.008 Test: blockdev nvme passthru rw ...passed 00:08:01.008 Test: blockdev nvme passthru vendor specific ...[2024-11-05 03:18:24.359598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.008 [2024-11-05 03:18:24.359640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.008 passed 00:08:01.008 Test: blockdev nvme admin passthru ...passed 00:08:01.008 Test: blockdev copy ...passed 00:08:01.008 Suite: bdevio tests on: Nvme0n1 00:08:01.008 Test: blockdev write read block ...passed 00:08:01.008 Test: blockdev write zeroes read block ...passed 00:08:01.008 Test: blockdev write zeroes read no split ...passed 00:08:01.008 Test: blockdev write zeroes read split ...passed 00:08:01.008 Test: blockdev write zeroes read split partial ...passed 00:08:01.008 Test: blockdev reset ...[2024-11-05 03:18:24.438331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:01.008 passed 00:08:01.008 Test: blockdev write read 8 blocks ...[2024-11-05 03:18:24.442650] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:01.008 passed 00:08:01.008 Test: blockdev write read size > 128k ...passed 00:08:01.008 Test: blockdev write read invalid size ...passed 00:08:01.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.008 Test: blockdev write read max offset ...passed 00:08:01.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.008 Test: blockdev writev readv 8 blocks ...passed 00:08:01.008 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.008 Test: blockdev writev readv block ...passed 00:08:01.008 Test: blockdev writev readv size > 128k ...passed 00:08:01.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.008 Test: blockdev comparev and writev ...[2024-11-05 03:18:24.450727] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:01.008 separate metadata which is not supported yet. 
00:08:01.008 passed 00:08:01.008 Test: blockdev nvme passthru rw ...passed 00:08:01.008 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.008 Test: blockdev nvme admin passthru ...[2024-11-05 03:18:24.451485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:01.008 [2024-11-05 03:18:24.451544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:01.008 passed 00:08:01.008 Test: blockdev copy ...passed 00:08:01.008 00:08:01.008 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.008 suites 6 6 n/a 0 0 00:08:01.008 tests 138 138 138 0 0 00:08:01.008 asserts 893 893 893 0 n/a 00:08:01.008 00:08:01.008 Elapsed time = 1.503 seconds 00:08:01.008 0 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61178 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61178 ']' 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61178 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61178 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:01.008 killing process with pid 61178 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61178' 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61178 00:08:01.008 03:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61178 00:08:02.384 03:18:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:02.384 00:08:02.384 real 0m3.101s 00:08:02.384 user 0m7.721s 00:08:02.384 sys 0m0.558s 00:08:02.384 03:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.384 03:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:02.384 ************************************ 00:08:02.384 END TEST bdev_bounds 00:08:02.384 ************************************ 00:08:02.384 03:18:25 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.384 03:18:25 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:02.384 03:18:25 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.384 03:18:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.384 ************************************ 00:08:02.384 START TEST bdev_nbd 00:08:02.384 ************************************ 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:02.384 03:18:25 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:02.384 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61243 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61243 /var/tmp/spdk-nbd.sock 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61243 ']' 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.385 03:18:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:02.385 [2024-11-05 03:18:25.863002] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
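The bdev_nbd test below exports each bdev as a kernel /dev/nbd* node through bdev_svc, with all control traffic on the dedicated /var/tmp/spdk-nbd.sock RPC socket, and then sanity-checks every node with a single 4096-byte O_DIRECT dd. A sketch of the same sequence done by hand, assuming the nbd kernel module is loaded (the test itself gates on /sys/module/nbd) and using a shortened output path for the dd check:

    # Expose a bdev as a kernel block device via the nbd RPC socket used above.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # One direct-I/O block read: the same check the waitfornbd helper performs below.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Enumerate attached nodes, then detach when done.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0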
00:08:02.385 [2024-11-05 03:18:25.863135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.644 [2024-11-05 03:18:26.048103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.644 [2024-11-05 03:18:26.166690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:03.581 03:18:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.581 1+0 records in 
00:08:03.581 1+0 records out 00:08:03.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731154 s, 5.6 MB/s 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:03.581 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:03.840 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.841 1+0 records in 00:08:03.841 1+0 records out 00:08:03.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737216 s, 5.6 MB/s 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:03.841 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.100 1+0 records in 00:08:04.100 1+0 records out 00:08:04.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780145 s, 5.3 MB/s 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:04.100 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.358 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.618 1+0 records in 00:08:04.618 1+0 records out 00:08:04.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801738 s, 5.1 MB/s 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.618 03:18:27 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:04.618 03:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.877 1+0 records in 00:08:04.877 1+0 records out 00:08:04.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00146713 s, 2.8 MB/s 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:04.877 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.136 1+0 records in 00:08:05.136 1+0 records out 00:08:05.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640033 s, 6.4 MB/s 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:05.136 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd0", 00:08:05.395 "bdev_name": "Nvme0n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd1", 00:08:05.395 "bdev_name": "Nvme1n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd2", 00:08:05.395 "bdev_name": "Nvme2n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd3", 00:08:05.395 "bdev_name": "Nvme2n2" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd4", 00:08:05.395 "bdev_name": "Nvme2n3" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd5", 00:08:05.395 "bdev_name": "Nvme3n1" 00:08:05.395 } 00:08:05.395 ]' 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd0", 00:08:05.395 "bdev_name": "Nvme0n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd1", 00:08:05.395 "bdev_name": "Nvme1n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd2", 00:08:05.395 "bdev_name": "Nvme2n1" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd3", 00:08:05.395 "bdev_name": "Nvme2n2" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd4", 00:08:05.395 "bdev_name": "Nvme2n3" 00:08:05.395 }, 00:08:05.395 { 00:08:05.395 "nbd_device": "/dev/nbd5", 00:08:05.395 "bdev_name": "Nvme3n1" 00:08:05.395 } 00:08:05.395 ]' 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.395 03:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.654 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:05.919 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.178 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.437 03:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.696 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.955 03:18:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.955 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.956 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:07.214 /dev/nbd0 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:07.214 
03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.214 1+0 records in 00:08:07.214 1+0 records out 00:08:07.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708865 s, 5.8 MB/s 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:07.214 03:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:07.473 /dev/nbd1 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.473 1+0 records in 00:08:07.473 1+0 records out 00:08:07.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064671 s, 6.3 MB/s 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 
-- # return 0 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:07.473 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:07.732 /dev/nbd10 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.732 1+0 records in 00:08:07.732 1+0 records out 00:08:07.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749169 s, 5.5 MB/s 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:07.732 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:07.990 /dev/nbd11 00:08:07.990 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:07.990 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:07.990 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:07.990 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:07.990 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:07.991 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.249 1+0 records in 00:08:08.249 1+0 records out 00:08:08.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726599 s, 5.6 MB/s 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:08.249 /dev/nbd12 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:08.249 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.507 1+0 records in 00:08:08.507 1+0 records out 00:08:08.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624634 s, 6.6 MB/s 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.507 03:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:08.507 /dev/nbd13 00:08:08.766 03:18:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.766 1+0 records in 00:08:08.766 1+0 records out 00:08:08.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078597 s, 5.2 MB/s 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.766 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd0", 00:08:09.034 "bdev_name": "Nvme0n1" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd1", 00:08:09.034 "bdev_name": "Nvme1n1" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd10", 00:08:09.034 "bdev_name": "Nvme2n1" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd11", 00:08:09.034 "bdev_name": "Nvme2n2" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd12", 00:08:09.034 "bdev_name": "Nvme2n3" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd13", 00:08:09.034 "bdev_name": "Nvme3n1" 00:08:09.034 } 00:08:09.034 ]' 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd0", 00:08:09.034 "bdev_name": "Nvme0n1" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd1", 00:08:09.034 "bdev_name": "Nvme1n1" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd10", 00:08:09.034 "bdev_name": "Nvme2n1" 00:08:09.034 }, 00:08:09.034 
{ 00:08:09.034 "nbd_device": "/dev/nbd11", 00:08:09.034 "bdev_name": "Nvme2n2" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd12", 00:08:09.034 "bdev_name": "Nvme2n3" 00:08:09.034 }, 00:08:09.034 { 00:08:09.034 "nbd_device": "/dev/nbd13", 00:08:09.034 "bdev_name": "Nvme3n1" 00:08:09.034 } 00:08:09.034 ]' 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.034 /dev/nbd1 00:08:09.034 /dev/nbd10 00:08:09.034 /dev/nbd11 00:08:09.034 /dev/nbd12 00:08:09.034 /dev/nbd13' 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.034 /dev/nbd1 00:08:09.034 /dev/nbd10 00:08:09.034 /dev/nbd11 00:08:09.034 /dev/nbd12 00:08:09.034 /dev/nbd13' 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.034 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:09.035 256+0 records in 00:08:09.035 256+0 records out 00:08:09.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138335 s, 75.8 MB/s 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.035 256+0 records in 00:08:09.035 256+0 records out 00:08:09.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131661 s, 8.0 MB/s 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.035 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.313 256+0 records in 00:08:09.313 256+0 records out 00:08:09.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133594 s, 7.8 MB/s 00:08:09.313 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.313 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:09.313 256+0 records in 00:08:09.313 256+0 records out 00:08:09.313 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.135963 s, 7.7 MB/s 00:08:09.313 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.313 03:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:09.571 256+0 records in 00:08:09.571 256+0 records out 00:08:09.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134753 s, 7.8 MB/s 00:08:09.571 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.571 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:09.571 256+0 records in 00:08:09.571 256+0 records out 00:08:09.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137999 s, 7.6 MB/s 00:08:09.571 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.571 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.831 256+0 records in 00:08:09.831 256+0 records out 00:08:09.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130927 s, 8.0 MB/s 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.831 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.090 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.349 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.349 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.350 03:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.609 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.868 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.127 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.387 03:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:11.646 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:11.905 malloc_lvol_verify 00:08:11.905 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:12.175 04882222-3010-4d36-a0ca-71bc7d1439c5 00:08:12.175 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:12.448 c8e0ee34-9bcb-4e8e-bfdc-0aa7e7949013 00:08:12.448 03:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:12.707 /dev/nbd0 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:12.707 mke2fs 1.47.0 (5-Feb-2023) 00:08:12.707 Discarding device blocks: 0/4096 done 00:08:12.707 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:12.707 00:08:12.707 Allocating group tables: 0/1 done 00:08:12.707 Writing inode tables: 0/1 done 00:08:12.707 Creating journal (1024 blocks): done 00:08:12.707 Writing superblocks and filesystem accounting information: 0/1 done 00:08:12.707 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:12.707 03:18:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.707 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61243 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61243 ']' 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61243 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61243 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:12.966 killing process with pid 61243 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61243' 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61243 00:08:12.966 03:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61243 00:08:14.345 03:18:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:14.345 00:08:14.345 real 0m11.919s 00:08:14.345 user 0m15.331s 00:08:14.345 sys 0m5.073s 00:08:14.345 03:18:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.345 03:18:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:14.345 ************************************ 00:08:14.345 END TEST bdev_nbd 00:08:14.345 ************************************ 00:08:14.345 03:18:37 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:14.345 03:18:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:14.345 skipping fio tests on NVMe due to multi-ns failures. 00:08:14.345 03:18:37 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
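The xtrace above repeats one polling idiom for every NBD device: after nbd_start_disk, waitfornbd polls /proc/partitions up to 20 times and then proves the device serves I/O with a single 4 KiB direct read; after nbd_stop_disk, waitfornbd_exit polls until the partition entry disappears. The sketch below condenses what the trace shows; the retry delay and the /tmp/nbdtest path are assumptions, since every probe in this log succeeds on the first pass and never reaches them.

    # Condensed sketch of the helpers exercised above (bash). The sleep and
    # the temp-file path are assumptions; this trace only shows first-try hits.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in this trace
        done
        ((i <= 20)) || return 1
        # One direct 4 KiB read proves the device is actually readable.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        (( size != 0 ))
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed back-off
        done
        ((i <= 20))
    }

The write path in the bdev_nbd test above is the mirror image: nbd_dd_data_verify first fills each /dev/nbdX from a shared 1 MiB random file with dd oflag=direct, then re-reads each device with cmp -b -n 1M against the same file before stopping the disks.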
00:08:14.345 03:18:37 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:14.345 03:18:37 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.345 03:18:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:14.345 03:18:37 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.345 03:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.345 ************************************ 00:08:14.345 START TEST bdev_verify 00:08:14.345 ************************************ 00:08:14.345 03:18:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.345 [2024-11-05 03:18:37.861540] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:14.345 [2024-11-05 03:18:37.861700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61632 ] 00:08:14.604 [2024-11-05 03:18:38.052405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.863 [2024-11-05 03:18:38.204991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.863 [2024-11-05 03:18:38.205028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.431 Running I/O for 5 seconds... 00:08:17.746 16896.00 IOPS, 66.00 MiB/s [2024-11-05T03:18:42.271Z] 18432.00 IOPS, 72.00 MiB/s [2024-11-05T03:18:43.208Z] 19264.00 IOPS, 75.25 MiB/s [2024-11-05T03:18:44.145Z] 19504.00 IOPS, 76.19 MiB/s [2024-11-05T03:18:44.145Z] 19596.80 IOPS, 76.55 MiB/s 00:08:20.561 Latency(us) 00:08:20.561 [2024-11-05T03:18:44.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.561 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x0 length 0xbd0bd 00:08:20.561 Nvme0n1 : 5.04 1601.53 6.26 0.00 0.00 79615.23 16634.04 75800.67 00:08:20.561 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:20.561 Nvme0n1 : 5.06 1632.31 6.38 0.00 0.00 78088.28 9264.53 79169.59 00:08:20.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x0 length 0xa0000 00:08:20.561 Nvme1n1 : 5.06 1605.81 6.27 0.00 0.00 79237.99 6843.12 68220.61 00:08:20.561 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0xa0000 length 0xa0000 00:08:20.561 Nvme1n1 : 5.06 1631.18 6.37 0.00 0.00 77976.57 11422.74 71168.41 00:08:20.561 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x0 length 0x80000 00:08:20.561 Nvme2n1 : 5.08 1613.46 6.30 0.00 0.00 78857.27 10475.23 62746.11 00:08:20.561 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x80000 length 0x80000 00:08:20.561 Nvme2n1 : 5.07 1639.60 6.40 0.00 0.00 77526.54 10212.04 61061.65 00:08:20.561 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x0 length 0x80000 00:08:20.561 Nvme2n2 : 5.08 1612.88 6.30 0.00 0.00 78768.87 10843.71 64009.46 00:08:20.561 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x80000 length 0x80000 00:08:20.561 Nvme2n2 : 5.08 1638.76 6.40 0.00 0.00 77398.70 11528.02 60640.54 00:08:20.561 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x0 length 0x80000 00:08:20.561 Nvme2n3 : 5.08 1611.93 6.30 0.00 0.00 78648.89 12422.89 67799.49 00:08:20.561 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.561 Verification LBA range: start 0x80000 length 0x80000 00:08:20.561 Nvme2n3 : 5.08 1638.17 6.40 0.00 0.00 77276.94 11580.66 61061.65 00:08:20.562 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.562 Verification LBA range: start 0x0 length 0x20000 00:08:20.562 Nvme3n1 : 5.08 1611.58 6.30 0.00 0.00 78512.23 12054.41 69483.95 00:08:20.562 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.562 Verification LBA range: start 0x20000 length 0x20000 00:08:20.562 Nvme3n1 : 5.08 1637.63 6.40 0.00 0.00 77173.79 11422.74 64430.57 00:08:20.562 [2024-11-05T03:18:44.146Z] =================================================================================================================== 00:08:20.562 [2024-11-05T03:18:44.146Z] Total : 19474.84 76.07 0.00 0.00 78249.35 6843.12 79169.59 00:08:21.941 00:08:21.941 real 0m7.749s 00:08:21.941 user 0m14.141s 00:08:21.941 sys 0m0.415s 00:08:21.941 03:18:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.941 ************************************ 00:08:21.941 END TEST bdev_verify 00:08:21.941 ************************************ 00:08:21.941 03:18:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.200 03:18:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.200 03:18:45 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:22.200 03:18:45 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.200 03:18:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.200 ************************************ 00:08:22.200 START TEST bdev_verify_big_io 00:08:22.200 ************************************ 00:08:22.200 03:18:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.200 [2024-11-05 03:18:45.669639] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
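The verify stage that just completed, and the big-I/O stage now starting, are both driven by a single bdevperf invocation; the only difference visible in the command lines above is the I/O size (-o 4096 vs -o 65536). A small wrapper captures the shape of the call. SPDK_DIR stands in for the /home/vagrant/spdk_repo/spdk prefix, and the flag glosses (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m core mask) reflect bdevperf's usual options; -C and the trailing empty argument are carried over from the log as-is.

    # Sketch of the bdevperf calls driving bdev_verify and bdev_verify_big_io.
    # SPDK_DIR is a stand-in for /home/vagrant/spdk_repo/spdk.
    run_bdevperf_verify() {
        local io_size=$1   # 4096 for bdev_verify, 65536 for bdev_verify_big_io
        "$SPDK_DIR/build/examples/bdevperf" \
            --json "$SPDK_DIR/test/bdev/bdev.json" \
            -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3 ''
    }

With -m 0x3 two reactors come up, which matches the paired "Core Mask 0x1" / "Core Mask 0x2" rows per namespace in the latency tables above.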
00:08:22.200 [2024-11-05 03:18:45.669773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61736 ] 00:08:22.458 [2024-11-05 03:18:45.858125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.459 [2024-11-05 03:18:46.001762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.459 [2024-11-05 03:18:46.001768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.395 Running I/O for 5 seconds... 00:08:27.308 1893.00 IOPS, 118.31 MiB/s [2024-11-05T03:18:51.827Z] 2634.50 IOPS, 164.66 MiB/s [2024-11-05T03:18:52.764Z] 2310.33 IOPS, 144.40 MiB/s [2024-11-05T03:18:52.764Z] 2445.50 IOPS, 152.84 MiB/s 00:08:29.180 Latency(us) 00:08:29.180 [2024-11-05T03:18:52.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.180 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0xbd0b 00:08:29.180 Nvme0n1 : 5.63 159.03 9.94 0.00 0.00 788863.76 26319.68 835491.88 00:08:29.180 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:29.180 Nvme0n1 : 5.64 157.68 9.85 0.00 0.00 792500.51 21476.86 859074.31 00:08:29.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0xa000 00:08:29.180 Nvme1n1 : 5.63 155.62 9.73 0.00 0.00 777559.02 73273.99 714210.80 00:08:29.180 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0xa000 length 0xa000 00:08:29.180 Nvme1n1 : 5.64 155.50 9.72 0.00 0.00 777989.78 53902.70 700735.13 00:08:29.180 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0x8000 00:08:29.180 Nvme2n1 : 5.63 159.10 9.94 0.00 0.00 747509.66 62325.00 727686.48 00:08:29.180 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x8000 length 0x8000 00:08:29.180 Nvme2n1 : 5.64 158.80 9.93 0.00 0.00 747333.88 66536.15 700735.13 00:08:29.180 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0x8000 00:08:29.180 Nvme2n2 : 5.65 162.24 10.14 0.00 0.00 715333.56 16949.87 744531.07 00:08:29.180 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x8000 length 0x8000 00:08:29.180 Nvme2n2 : 5.65 158.70 9.92 0.00 0.00 727389.91 68220.61 704104.04 00:08:29.180 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0x8000 00:08:29.180 Nvme2n3 : 5.69 168.67 10.54 0.00 0.00 671510.24 28846.37 808540.53 00:08:29.180 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x8000 length 0x8000 00:08:29.180 Nvme2n3 : 5.69 168.74 10.55 0.00 0.00 669151.68 21161.02 714210.80 00:08:29.180 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x0 length 0x2000 00:08:29.180 Nvme3n1 : 5.75 188.75 11.80 0.00 0.00 586821.82 1138.33 818647.29 00:08:29.180 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.180 Verification LBA range: start 0x2000 length 0x2000 00:08:29.180 Nvme3n1 : 5.76 181.90 11.37 0.00 0.00 606455.30 1059.37 1522751.33 00:08:29.180 [2024-11-05T03:18:52.764Z] =================================================================================================================== 00:08:29.180 [2024-11-05T03:18:52.764Z] Total : 1974.73 123.42 0.00 0.00 712854.92 1059.37 1522751.33 00:08:31.086 00:08:31.086 real 0m9.078s 00:08:31.086 user 0m16.794s 00:08:31.086 sys 0m0.444s 00:08:31.086 03:18:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.086 03:18:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:31.086 ************************************ 00:08:31.086 END TEST bdev_verify_big_io 00:08:31.086 ************************************ 00:08:31.345 03:18:54 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.345 03:18:54 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:31.345 03:18:54 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.345 03:18:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.345 ************************************ 00:08:31.345 START TEST bdev_write_zeroes 00:08:31.345 ************************************ 00:08:31.345 03:18:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.345 [2024-11-05 03:18:54.826067] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:31.345 [2024-11-05 03:18:54.826952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:08:31.605 [2024-11-05 03:18:55.011965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.605 [2024-11-05 03:18:55.145602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.541 Running I/O for 1 seconds... 
00:08:33.475 79104.00 IOPS, 309.00 MiB/s 00:08:33.475 Latency(us) 00:08:33.475 [2024-11-05T03:18:57.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.475 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme0n1 : 1.02 13135.59 51.31 0.00 0.00 9725.75 5448.17 18634.33 00:08:33.475 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme1n1 : 1.02 13122.98 51.26 0.00 0.00 9723.59 8738.13 18318.50 00:08:33.475 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme2n1 : 1.02 13110.99 51.21 0.00 0.00 9711.14 8632.85 17897.38 00:08:33.475 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme2n2 : 1.02 13099.30 51.17 0.00 0.00 9667.58 6132.49 17265.71 00:08:33.475 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme2n3 : 1.02 13087.18 51.12 0.00 0.00 9664.15 6027.21 17370.99 00:08:33.475 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.475 Nvme3n1 : 1.02 13075.31 51.08 0.00 0.00 9659.83 6211.44 18844.89 00:08:33.475 [2024-11-05T03:18:57.059Z] =================================================================================================================== 00:08:33.475 [2024-11-05T03:18:57.059Z] Total : 78631.35 307.15 0.00 0.00 9692.01 5448.17 18844.89 00:08:34.885 00:08:34.885 real 0m3.428s 00:08:34.885 user 0m2.951s 00:08:34.885 sys 0m0.363s 00:08:34.885 03:18:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.885 03:18:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:34.885 ************************************ 00:08:34.885 END TEST bdev_write_zeroes 00:08:34.885 ************************************ 00:08:34.885 03:18:58 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.885 03:18:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:34.885 03:18:58 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.885 03:18:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.885 ************************************ 00:08:34.885 START TEST bdev_json_nonenclosed 00:08:34.885 ************************************ 00:08:34.885 03:18:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.885 [2024-11-05 03:18:58.318570] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
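The MiB/s column in the write_zeroes table above is just IOPS scaled by the 4096-byte I/O size; a one-line awk check reproduces the reported figures.

    # MiB/s = IOPS * io_size / 2^20 for the 4096-byte write_zeroes run:
    awk 'BEGIN {
        printf "%.2f\n", 13135.59 * 4096 / 1048576   # Nvme0n1 row -> 51.31 MiB/s
        printf "%.2f\n", 78631.35 * 4096 / 1048576   # Total row   -> 307.15 MiB/s
    }'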
00:08:34.885 [2024-11-05 03:18:58.318722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61909 ] 00:08:35.145 [2024-11-05 03:18:58.504914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.145 [2024-11-05 03:18:58.639127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.145 [2024-11-05 03:18:58.639255] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:35.145 [2024-11-05 03:18:58.639282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.145 [2024-11-05 03:18:58.639315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.404 00:08:35.404 real 0m0.696s 00:08:35.404 user 0m0.411s 00:08:35.404 sys 0m0.180s 00:08:35.404 03:18:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.404 03:18:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:35.404 ************************************ 00:08:35.404 END TEST bdev_json_nonenclosed 00:08:35.404 ************************************ 00:08:35.404 03:18:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.404 03:18:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:35.404 03:18:58 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.404 03:18:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.404 ************************************ 00:08:35.404 START TEST bdev_json_nonarray 00:08:35.404 ************************************ 00:08:35.404 03:18:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.664 [2024-11-05 03:18:59.083975] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:35.664 [2024-11-05 03:18:59.084106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:08:35.923 [2024-11-05 03:18:59.269814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.923 [2024-11-05 03:18:59.391165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.923 [2024-11-05 03:18:59.391284] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:35.923 [2024-11-05 03:18:59.391333] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.923 [2024-11-05 03:18:59.391345] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.183 00:08:36.183 real 0m0.673s 00:08:36.183 user 0m0.413s 00:08:36.183 sys 0m0.155s 00:08:36.183 03:18:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.183 03:18:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:36.183 ************************************ 00:08:36.183 END TEST bdev_json_nonarray 00:08:36.183 ************************************ 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:36.183 03:18:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:36.183 00:08:36.183 real 0m44.126s 00:08:36.183 user 1m4.345s 00:08:36.183 sys 0m8.690s 00:08:36.183 03:18:59 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.183 03:18:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.183 ************************************ 00:08:36.183 END TEST blockdev_nvme 00:08:36.183 ************************************ 00:08:36.443 03:18:59 -- spdk/autotest.sh@209 -- # uname -s 00:08:36.443 03:18:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:36.443 03:18:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:36.443 03:18:59 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:36.443 03:18:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.443 03:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.443 ************************************ 00:08:36.443 START TEST blockdev_nvme_gpt 00:08:36.443 ************************************ 00:08:36.443 03:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:36.443 * Looking for test storage... 
00:08:36.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:36.443 03:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.443 03:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.443 03:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.443 03:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.443 03:18:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.443 03:19:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:36.443 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.443 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.443 --rc genhtml_branch_coverage=1 00:08:36.443 --rc genhtml_function_coverage=1 00:08:36.443 --rc genhtml_legend=1 00:08:36.443 --rc geninfo_all_blocks=1 00:08:36.443 --rc geninfo_unexecuted_blocks=1 00:08:36.443 00:08:36.443 ' 00:08:36.443 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.443 --rc 
genhtml_branch_coverage=1 00:08:36.443 --rc genhtml_function_coverage=1 00:08:36.443 --rc genhtml_legend=1 00:08:36.443 --rc geninfo_all_blocks=1 00:08:36.443 --rc geninfo_unexecuted_blocks=1 00:08:36.443 00:08:36.443 ' 00:08:36.443 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:36.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.443 --rc genhtml_branch_coverage=1 00:08:36.443 --rc genhtml_function_coverage=1 00:08:36.443 --rc genhtml_legend=1 00:08:36.443 --rc geninfo_all_blocks=1 00:08:36.443 --rc geninfo_unexecuted_blocks=1 00:08:36.443 00:08:36.443 ' 00:08:36.443 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.443 --rc genhtml_branch_coverage=1 00:08:36.443 --rc genhtml_function_coverage=1 00:08:36.443 --rc genhtml_legend=1 00:08:36.443 --rc geninfo_all_blocks=1 00:08:36.443 --rc geninfo_unexecuted_blocks=1 00:08:36.443 00:08:36.443 ' 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:36.443 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62024 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:36.703 03:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62024 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 62024 ']' 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.703 03:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.703 [2024-11-05 03:19:00.147728] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:36.703 [2024-11-05 03:19:00.148069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:08:36.962 [2024-11-05 03:19:00.333082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.962 [2024-11-05 03:19:00.469192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.340 03:19:01 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.340 03:19:01 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:08:38.340 03:19:01 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:38.340 03:19:01 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:38.340 03:19:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:38.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.858 Waiting for block devices as requested 00:08:38.858 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.117 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.117 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.117 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.393 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
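The xtrace above is the whole of get_zoned_devs: a namespace counts as zoned when /sys/block/<dev>/queue/zoned exists and holds anything other than none, and since every device in this VM reports none, zoned_devs stays empty. A condensed sketch of the same check (flattened to a plain list instead of the harness's associative array):

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # a missing attribute, or one reading "none", means the device is not zoned
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs+=("${nvme##*/}")
        fi
    done

The nvme_devs scan traced next then looks for the first device without a recognised disk label.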
00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:44.393 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:44.394 BYT; 00:08:44.394 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:44.394 BYT; 00:08:44.394 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.394 03:19:07 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.394 03:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:45.773 The operation has completed successfully. 00:08:45.773 03:19:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:46.710 The operation has completed successfully. 00:08:46.710 03:19:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:47.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.847 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.847 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.106 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.106 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:48.106 03:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.106 03:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.106 [] 00:08:48.106 03:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.106 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:48.106 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:48.106 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:48.106 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.365 03:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:48.366 03:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.366 03:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:48.624 03:19:12 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.624 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.624 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:48.884 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.884 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:48.884 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:48.885 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9d349079-bc40-4f3b-9da2-8f0293d26745"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9d349079-bc40-4f3b-9da2-8f0293d26745",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8de7033e-1e67-41fe-98a4-8737ccc3c50f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8de7033e-1e67-41fe-98a4-8737ccc3c50f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7849fa57-36e8-4543-b194-3cc1cd6c72f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7849fa57-36e8-4543-b194-3cc1cd6c72f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fa45164f-3b9b-480f-91ab-4e1e311d2995"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa45164f-3b9b-480f-91ab-4e1e311d2995",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6292b12d-1fea-467b-8f91-fd8afc7d12e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6292b12d-1fea-467b-8f91-fd8afc7d12e6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:48.885 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:48.885 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:48.885 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:48.885 03:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62024 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 62024 ']' 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 62024 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62024 00:08:48.885 killing process with pid 62024 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62024' 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 62024 00:08:48.885 03:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 62024 00:08:51.427 03:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:51.427 03:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:51.427 03:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:51.427 03:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.427 03:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:51.427 ************************************ 00:08:51.427 START TEST bdev_hello_world 00:08:51.427 ************************************ 00:08:51.427 03:19:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:51.427 
[2024-11-05 03:19:14.954734] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:51.427 [2024-11-05 03:19:14.954864] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62669 ] 00:08:51.692 [2024-11-05 03:19:15.137963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.692 [2024-11-05 03:19:15.272961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.629 [2024-11-05 03:19:15.970249] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:52.629 [2024-11-05 03:19:15.970514] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:52.629 [2024-11-05 03:19:15.970554] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:52.629 [2024-11-05 03:19:15.973777] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:52.629 [2024-11-05 03:19:15.974407] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:52.629 [2024-11-05 03:19:15.974443] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:52.629 [2024-11-05 03:19:15.974764] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:52.629 00:08:52.629 [2024-11-05 03:19:15.974798] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:53.567 00:08:53.567 real 0m2.277s 00:08:53.567 user 0m1.827s 00:08:53.567 sys 0m0.339s 00:08:53.567 ************************************ 00:08:53.567 END TEST bdev_hello_world 00:08:53.567 ************************************ 00:08:53.567 03:19:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.567 03:19:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:53.825 03:19:17 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:53.826 03:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:53.826 03:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.826 03:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.826 ************************************ 00:08:53.826 START TEST bdev_bounds 00:08:53.826 ************************************ 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62716 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62716' 00:08:53.826 Process bdevio pid: 62716 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62716 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62716 ']' 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.826 03:19:17 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.826 03:19:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:54.085 [2024-11-05 03:19:17.298969] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:08:54.085 [2024-11-05 03:19:17.299106] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62716 ] 00:08:54.085 [2024-11-05 03:19:17.483045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.085 [2024-11-05 03:19:17.614774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.085 [2024-11-05 03:19:17.615019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.085 [2024-11-05 03:19:17.615021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.022 03:19:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.022 03:19:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:55.022 03:19:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:55.022 I/O targets: 00:08:55.022 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:55.022 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:55.022 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:55.022 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.022 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.022 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.022 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:55.022 00:08:55.022 00:08:55.022 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.022 http://cunit.sourceforge.net/ 00:08:55.022 00:08:55.022 00:08:55.022 Suite: bdevio tests on: Nvme3n1 00:08:55.022 Test: blockdev write read block ...passed 00:08:55.022 Test: blockdev write zeroes read block ...passed 00:08:55.022 Test: blockdev write zeroes read no split ...passed 00:08:55.022 Test: blockdev write zeroes read split ...passed 00:08:55.022 Test: blockdev write zeroes read split partial ...passed 00:08:55.022 Test: blockdev reset ...[2024-11-05 03:19:18.539540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:55.022 [2024-11-05 03:19:18.543910] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:08:55.022 Test: blockdev write read 8 blocks ...
00:08:55.022 passed 00:08:55.022 Test: blockdev write read size > 128k ...passed 00:08:55.022 Test: blockdev write read invalid size ...passed 00:08:55.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.022 Test: blockdev write read max offset ...passed 00:08:55.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.022 Test: blockdev writev readv 8 blocks ...passed 00:08:55.022 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.022 Test: blockdev writev readv block ...passed 00:08:55.022 Test: blockdev writev readv size > 128k ...passed 00:08:55.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.022 Test: blockdev comparev and writev ...[2024-11-05 03:19:18.554170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd604000 len:0x1000 00:08:55.022 [2024-11-05 03:19:18.554230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.022 passed 00:08:55.022 Test: blockdev nvme passthru rw ...passed 00:08:55.022 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.022 Test: blockdev nvme admin passthru ...[2024-11-05 03:19:18.555226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.022 [2024-11-05 03:19:18.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.022 passed 00:08:55.022 Test: blockdev copy ...passed 00:08:55.022 Suite: bdevio tests on: Nvme2n3 00:08:55.022 Test: blockdev write read block ...passed 00:08:55.022 Test: blockdev write zeroes read block ...passed 00:08:55.022 Test: blockdev write zeroes read no split ...passed 00:08:55.022 Test: blockdev write zeroes read split ...passed 00:08:55.282 Test: blockdev write zeroes read split partial ...passed 00:08:55.282 Test: blockdev reset ...[2024-11-05 03:19:18.633737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:55.282 [2024-11-05 03:19:18.638334] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
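Each suite's reset test round-trips its controller: nvme_ctrlr.c logs resetting controller when the detach starts and bdev_nvme.c logs Resetting controller successful. once it reattaches, so a clean run contributes exactly one such pair per suite. A quick way to tally the pairs from a saved copy of this console output (the file name is hypothetical):

    grep -Eo '\[0000:00:1[0-3]\.0, 0\] (resetting controller|Resetting controller successful\.)' bdevio.log \
        | sort | uniq -c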
00:08:55.282 00:08:55.282 Test: blockdev write read 8 blocks ...passed 00:08:55.282 Test: blockdev write read size > 128k ...passed 00:08:55.282 Test: blockdev write read invalid size ...passed 00:08:55.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.282 Test: blockdev write read max offset ...passed 00:08:55.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.282 Test: blockdev writev readv 8 blocks ...passed 00:08:55.282 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.282 Test: blockdev writev readv block ...passed 00:08:55.282 Test: blockdev writev readv size > 128k ...passed 00:08:55.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.282 Test: blockdev comparev and writev ...[2024-11-05 03:19:18.649410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd602000 len:0x1000 00:08:55.282 [2024-11-05 03:19:18.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev nvme passthru rw ...passed 00:08:55.282 Test: blockdev nvme passthru vendor specific ...[2024-11-05 03:19:18.650983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.282 [2024-11-05 03:19:18.651141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev nvme admin passthru ...passed 00:08:55.282 Test: blockdev copy ...passed 00:08:55.282 Suite: bdevio tests on: Nvme2n2 00:08:55.282 Test: blockdev write read block ...passed 00:08:55.282 Test: blockdev write zeroes read block ...passed 00:08:55.282 Test: blockdev write zeroes read no split ...passed 00:08:55.282 Test: blockdev write zeroes read split ...passed 00:08:55.282 Test: blockdev write zeroes read split partial ...passed 00:08:55.282 Test: blockdev reset ...[2024-11-05 03:19:18.731123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:55.282 [2024-11-05 03:19:18.735976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:08:55.282 Test: blockdev write read 8 blocks ...
00:08:55.282 passed 00:08:55.282 Test: blockdev write read size > 128k ...passed 00:08:55.282 Test: blockdev write read invalid size ...passed 00:08:55.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.282 Test: blockdev write read max offset ...passed 00:08:55.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.282 Test: blockdev writev readv 8 blocks ...passed 00:08:55.282 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.282 Test: blockdev writev readv block ...passed 00:08:55.282 Test: blockdev writev readv size > 128k ...passed 00:08:55.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.282 Test: blockdev comparev and writev ...[2024-11-05 03:19:18.746059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc38000 len:0x1000 00:08:55.282 [2024-11-05 03:19:18.746112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev nvme passthru rw ...passed 00:08:55.282 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.282 Test: blockdev nvme admin passthru ...[2024-11-05 03:19:18.747151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.282 [2024-11-05 03:19:18.747188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev copy ...passed 00:08:55.282 Suite: bdevio tests on: Nvme2n1 00:08:55.282 Test: blockdev write read block ...passed 00:08:55.282 Test: blockdev write zeroes read block ...passed 00:08:55.282 Test: blockdev write zeroes read no split ...passed 00:08:55.282 Test: blockdev write zeroes read split ...passed 00:08:55.282 Test: blockdev write zeroes read split partial ...passed 00:08:55.282 Test: blockdev reset ...[2024-11-05 03:19:18.824490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:55.282 [2024-11-05 03:19:18.829271] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:08:55.282 Test: blockdev write read 8 blocks ...
00:08:55.282 passed 00:08:55.282 Test: blockdev write read size > 128k ...passed 00:08:55.282 Test: blockdev write read invalid size ...passed 00:08:55.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.282 Test: blockdev write read max offset ...passed 00:08:55.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.282 Test: blockdev writev readv 8 blocks ...passed 00:08:55.282 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.282 Test: blockdev writev readv block ...passed 00:08:55.282 Test: blockdev writev readv size > 128k ...passed 00:08:55.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.282 Test: blockdev comparev and writev ...[2024-11-05 03:19:18.840627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc34000 len:0x1000 00:08:55.282 [2024-11-05 03:19:18.840814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev nvme passthru rw ...passed 00:08:55.282 Test: blockdev nvme passthru vendor specific ...[2024-11-05 03:19:18.842222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.282 [2024-11-05 03:19:18.842370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.282 passed 00:08:55.282 Test: blockdev nvme admin passthru ...passed 00:08:55.282 Test: blockdev copy ...passed 00:08:55.282 Suite: bdevio tests on: Nvme1n1p2 00:08:55.282 Test: blockdev write read block ...passed 00:08:55.282 Test: blockdev write zeroes read block ...passed 00:08:55.282 Test: blockdev write zeroes read no split ...passed 00:08:55.542 Test: blockdev write zeroes read split ...passed 00:08:55.542 Test: blockdev write zeroes read split partial ...passed 00:08:55.542 Test: blockdev reset ...[2024-11-05 03:19:18.921592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:55.542 [2024-11-05 03:19:18.925876] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:08:55.542 Test: blockdev write read 8 blocks ...
00:08:55.542 passed 00:08:55.542 Test: blockdev write read size > 128k ...passed 00:08:55.542 Test: blockdev write read invalid size ...passed 00:08:55.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.542 Test: blockdev write read max offset ...passed 00:08:55.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.542 Test: blockdev writev readv 8 blocks ...passed 00:08:55.542 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.542 Test: blockdev writev readv block ...passed 00:08:55.542 Test: blockdev writev readv size > 128k ...passed 00:08:55.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.542 Test: blockdev comparev and writev ...[2024-11-05 03:19:18.936186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfc30000 len:0x1000 00:08:55.542 [2024-11-05 03:19:18.936368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.542 passed 00:08:55.542 Test: blockdev nvme passthru rw ...passed 00:08:55.542 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.542 Test: blockdev nvme admin passthru ...passed 00:08:55.542 Test: blockdev copy ...passed 00:08:55.542 Suite: bdevio tests on: Nvme1n1p1 00:08:55.542 Test: blockdev write read block ...passed 00:08:55.542 Test: blockdev write zeroes read block ...passed 00:08:55.542 Test: blockdev write zeroes read no split ...passed 00:08:55.542 Test: blockdev write zeroes read split ...passed 00:08:55.542 Test: blockdev write zeroes read split partial ...passed 00:08:55.542 Test: blockdev reset ...[2024-11-05 03:19:19.024449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:55.542 [2024-11-05 03:19:19.028885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:08:55.542 Test: blockdev write read 8 blocks ...
00:08:55.542 passed 00:08:55.542 Test: blockdev write read size > 128k ...passed 00:08:55.542 Test: blockdev write read invalid size ...passed 00:08:55.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.542 Test: blockdev write read max offset ...passed 00:08:55.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.542 Test: blockdev writev readv 8 blocks ...passed 00:08:55.542 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.542 Test: blockdev writev readv block ...passed 00:08:55.542 Test: blockdev writev readv size > 128k ...passed 00:08:55.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.542 Test: blockdev comparev and writev ...[2024-11-05 03:19:19.038694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bd80e000 len:0x1000 00:08:55.542 [2024-11-05 03:19:19.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.542 passed 00:08:55.542 Test: blockdev nvme passthru rw ...passed 00:08:55.542 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.542 Test: blockdev nvme admin passthru ...passed 00:08:55.542 Test: blockdev copy ...passed 00:08:55.542 Suite: bdevio tests on: Nvme0n1 00:08:55.542 Test: blockdev write read block ...passed 00:08:55.542 Test: blockdev write zeroes read block ...passed 00:08:55.542 Test: blockdev write zeroes read no split ...passed 00:08:55.542 Test: blockdev write zeroes read split ...passed 00:08:55.542 Test: blockdev write zeroes read split partial ...passed 00:08:55.542 Test: blockdev reset ...[2024-11-05 03:19:19.108493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:55.542 [2024-11-05 03:19:19.112730] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:55.542 passed 00:08:55.542 Test: blockdev write read 8 blocks ...passed 00:08:55.542 Test: blockdev write read size > 128k ...passed 00:08:55.542 Test: blockdev write read invalid size ...passed 00:08:55.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.542 Test: blockdev write read max offset ...passed 00:08:55.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.542 Test: blockdev writev readv 8 blocks ...passed 00:08:55.542 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.542 Test: blockdev writev readv block ...passed 00:08:55.542 Test: blockdev writev readv size > 128k ...passed 00:08:55.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.542 Test: blockdev comparev and writev ...passed 00:08:55.542 Test: blockdev nvme passthru rw ...[2024-11-05 03:19:19.121794] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:55.542 separate metadata which is not supported yet.
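The COMPARE FAILURE (02/85) completions in the suites above are provoked deliberately: the comparev case exercises the failing-compare branch, which is why each test still reports passed right after the notice. The skip on Nvme0n1, by contrast, happens because that bdev carries separate (non-interleaved) metadata, which bdevio's comparev case does not handle yet; it is an expected skip, not a failure. A bdev's metadata layout can be inspected over the same RPC mechanism; a sketch, assuming bdev_get_bdevs exposes md_size/md_interleave fields as in recent SPDK releases (treat the field names as assumptions):

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {name, block_size, md_size, md_interleave}'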
00:08:55.542 passed 00:08:55.542 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.543 Test: blockdev nvme admin passthru ...[2024-11-05 03:19:19.122463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:55.543 [2024-11-05 03:19:19.122513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:55.802 passed 00:08:55.802 Test: blockdev copy ...passed 00:08:55.802 00:08:55.802 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.802 suites 7 7 n/a 0 0 00:08:55.802 tests 161 161 161 0 0 00:08:55.802 asserts 1025 1025 1025 0 n/a 00:08:55.802 00:08:55.802 Elapsed time = 1.781 seconds 00:08:55.802 0 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62716 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62716 ']' 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62716 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62716 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62716' 00:08:55.802 killing process with pid 62716 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62716 00:08:55.802 03:19:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62716 00:08:56.739 03:19:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:56.739 00:08:56.739 real 0m3.114s 00:08:56.739 user 0m7.898s 00:08:56.739 sys 0m0.511s 00:08:56.739 03:19:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.739 03:19:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:56.739 ************************************ 00:08:56.739 END TEST bdev_bounds 00:08:56.739 ************************************ 00:08:56.998 03:19:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:56.998 03:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:56.998 03:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.998 03:19:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 ************************************ 00:08:56.998 START TEST bdev_nbd 00:08:56.998 ************************************ 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62781 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62781 /var/tmp/spdk-nbd.sock 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62781 ']' 00:08:56.998 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.999 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.999 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:56.999 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.999 03:19:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:56.999 [2024-11-05 03:19:20.497601] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
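At this point the nbd half of the job begins: nbd_function_test launches a bare bdev_svc application against the bdev JSON config with a private RPC socket, checks that the kernel nbd module is present (the [[ -e /sys/module/nbd ]] test above), and then exports each bdev as a kernel NBD device so stock tools such as dd and stat can exercise it. Stripped of the harness, the flow is roughly this sketch (run from an SPDK checkout; paths shortened from the ones in the trace):

  SOCK=/var/tmp/spdk-nbd.sock
  ./test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json ./test/bdev/bdev.json &
  # once the socket is listening: export, inspect, tear down
  ./scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
  ./scripts/rpc.py -s "$SOCK" nbd_get_disks
  ./scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0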
00:08:56.999 [2024-11-05 03:19:20.497745] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.258 [2024-11-05 03:19:20.679821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.258 [2024-11-05 03:19:20.819316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.196 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.455 1+0 records in 00:08:58.455 1+0 records out 00:08:58.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475421 s, 8.6 MB/s 00:08:58.455 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.456 03:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.715 1+0 records in 00:08:58.715 1+0 records out 00:08:58.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718303 s, 5.7 MB/s 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.715 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.975 1+0 records in 00:08:58.975 1+0 records out 00:08:58.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638225 s, 6.4 MB/s 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.975 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.234 1+0 records in 00:08:59.234 1+0 records out 00:08:59.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067825 s, 6.0 MB/s 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:59.234 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.494 1+0 records in 00:08:59.494 1+0 records out 00:08:59.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774038 s, 5.3 MB/s 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:59.494 03:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
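Every nbd_start_disk in this stretch is followed by the waitfornbd helper whose body is traced around it: poll /proc/partitions until the device name shows up, then read a single 4 KiB block with iflag=direct and confirm via stat that data actually arrived. Reassembled from the trace as a self-contained sketch (the sleep between retries is an assumption; the trace only shows the loop bounds):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one direct-I/O read proves the kernel<->SPDK data path is live
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }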
00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.754 1+0 records in 00:08:59.754 1+0 records out 00:08:59.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742726 s, 5.5 MB/s 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:59.754 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.013 1+0 records in 00:09:00.013 1+0 records out 00:09:00.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000863872 s, 4.7 MB/s 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:00.013 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.272 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:00.272 { 00:09:00.272 "nbd_device": "/dev/nbd0", 00:09:00.272 "bdev_name": "Nvme0n1" 00:09:00.272 }, 00:09:00.272 { 00:09:00.272 "nbd_device": "/dev/nbd1", 00:09:00.272 "bdev_name": "Nvme1n1p1" 00:09:00.272 }, 00:09:00.272 { 00:09:00.272 "nbd_device": "/dev/nbd2", 00:09:00.272 "bdev_name": "Nvme1n1p2" 00:09:00.272 }, 00:09:00.272 { 00:09:00.272 "nbd_device": "/dev/nbd3", 00:09:00.272 "bdev_name": "Nvme2n1" 00:09:00.272 }, 00:09:00.272 { 00:09:00.273 "nbd_device": "/dev/nbd4", 00:09:00.273 "bdev_name": "Nvme2n2" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd5", 00:09:00.273 "bdev_name": "Nvme2n3" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd6", 00:09:00.273 "bdev_name": "Nvme3n1" 00:09:00.273 } 00:09:00.273 ]' 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd0", 00:09:00.273 "bdev_name": "Nvme0n1" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd1", 00:09:00.273 "bdev_name": "Nvme1n1p1" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd2", 00:09:00.273 "bdev_name": "Nvme1n1p2" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd3", 00:09:00.273 "bdev_name": "Nvme2n1" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd4", 00:09:00.273 "bdev_name": "Nvme2n2" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd5", 00:09:00.273 "bdev_name": "Nvme2n3" 00:09:00.273 }, 00:09:00.273 { 00:09:00.273 "nbd_device": "/dev/nbd6", 00:09:00.273 "bdev_name": "Nvme3n1" 00:09:00.273 } 00:09:00.273 ]' 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.273 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.533 03:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.533 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.792 03:19:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.051 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.310 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.570 03:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
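Teardown is symmetric, as the per-device loop around this point shows: nbd_stop_disk is issued over the RPC socket and waitfornbd_exit then polls /proc/partitions until the entry disappears. A sketch of that pairing, with the loop's exit condition inferred from the grep/break pattern visible in the trace:

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break   # entry gone: done
          sleep 0.1
      done
      return 0
  }
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 && waitfornbd_exit nbd0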
00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.829 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:02.089 03:19:25 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.089 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:02.349 /dev/nbd0 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.349 1+0 records in 00:09:02.349 1+0 records out 00:09:02.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677032 s, 6.0 MB/s 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.349 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:02.608 /dev/nbd1 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:02.608 03:19:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.608 1+0 records in 00:09:02.608 1+0 records out 00:09:02.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468782 s, 8.7 MB/s 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.608 03:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:02.867 /dev/nbd10 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.867 1+0 records in 00:09:02.867 1+0 records out 00:09:02.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006975 s, 5.9 MB/s 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.867 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:03.126 /dev/nbd11 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.126 1+0 records in 00:09:03.126 1+0 records out 00:09:03.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105929 s, 3.9 MB/s 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:03.126 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:03.126 /dev/nbd12 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
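Once this second attach pass pins each bdev to a fixed index (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14), the harness re-reads nbd_get_disks and counts the exports with the same jq '.[] | .nbd_device' filter used for the empty-list check earlier. The equivalent one-off verification, with the expected count of 7 hard-coded to this job's bdev list:

  SOCK=/var/tmp/spdk-nbd.sock
  count=$(./scripts/rpc.py -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -eq 7 ] || echo "expected 7 NBD exports, found $count" >&2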
00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.385 1+0 records in 00:09:03.385 1+0 records out 00:09:03.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000957517 s, 4.3 MB/s 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:03.385 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:03.385 /dev/nbd13 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.644 1+0 records in 00:09:03.644 1+0 records out 00:09:03.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665394 s, 6.2 MB/s 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.644 03:19:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:03.644 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:03.644 /dev/nbd14 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.903 1+0 records in 00:09:03.903 1+0 records out 00:09:03.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094572 s, 4.3 MB/s 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.903 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd0", 00:09:04.162 "bdev_name": "Nvme0n1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd1", 00:09:04.162 "bdev_name": "Nvme1n1p1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd10", 00:09:04.162 "bdev_name": "Nvme1n1p2" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd11", 00:09:04.162 "bdev_name": "Nvme2n1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd12", 00:09:04.162 "bdev_name": "Nvme2n2" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd13", 00:09:04.162 "bdev_name": "Nvme2n3" 
00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd14", 00:09:04.162 "bdev_name": "Nvme3n1" 00:09:04.162 } 00:09:04.162 ]' 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd0", 00:09:04.162 "bdev_name": "Nvme0n1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd1", 00:09:04.162 "bdev_name": "Nvme1n1p1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd10", 00:09:04.162 "bdev_name": "Nvme1n1p2" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd11", 00:09:04.162 "bdev_name": "Nvme2n1" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd12", 00:09:04.162 "bdev_name": "Nvme2n2" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd13", 00:09:04.162 "bdev_name": "Nvme2n3" 00:09:04.162 }, 00:09:04.162 { 00:09:04.162 "nbd_device": "/dev/nbd14", 00:09:04.162 "bdev_name": "Nvme3n1" 00:09:04.162 } 00:09:04.162 ]' 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:04.162 /dev/nbd1 00:09:04.162 /dev/nbd10 00:09:04.162 /dev/nbd11 00:09:04.162 /dev/nbd12 00:09:04.162 /dev/nbd13 00:09:04.162 /dev/nbd14' 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:04.162 /dev/nbd1 00:09:04.162 /dev/nbd10 00:09:04.162 /dev/nbd11 00:09:04.162 /dev/nbd12 00:09:04.162 /dev/nbd13 00:09:04.162 /dev/nbd14' 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.162 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:04.163 256+0 records in 00:09:04.163 256+0 records out 00:09:04.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695933 s, 151 MB/s 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:04.163 256+0 records in 00:09:04.163 256+0 records out 00:09:04.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.140932 s, 7.4 MB/s 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.163 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:04.421 256+0 records in 00:09:04.421 256+0 records out 00:09:04.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147309 s, 7.1 MB/s 00:09:04.421 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.421 03:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:04.421 256+0 records in 00:09:04.421 256+0 records out 00:09:04.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147142 s, 7.1 MB/s 00:09:04.421 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.421 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:04.680 256+0 records in 00:09:04.680 256+0 records out 00:09:04.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147424 s, 7.1 MB/s 00:09:04.680 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.680 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:04.939 256+0 records in 00:09:04.939 256+0 records out 00:09:04.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147561 s, 7.1 MB/s 00:09:04.939 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.939 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:04.939 256+0 records in 00:09:04.939 256+0 records out 00:09:04.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151748 s, 6.9 MB/s 00:09:04.939 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.939 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:05.199 256+0 records in 00:09:05.199 256+0 records out 00:09:05.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146623 s, 7.2 MB/s 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.199 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.459 03:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.718 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.977 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.244 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.503 03:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.762 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.021 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.021 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.021 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.021 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:07.280 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:07.280 malloc_lvol_verify 00:09:07.539 03:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:07.539 8df426af-2853-4367-96f5-a9bfd1d343b8 00:09:07.539 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:07.798 1eff30b6-4615-4619-bfcd-96431cbc46a8 00:09:07.798 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:08.057 /dev/nbd0 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:08.057 mke2fs 1.47.0 (5-Feb-2023) 00:09:08.057 Discarding device blocks: 0/4096 done 00:09:08.057 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:08.057 00:09:08.057 Allocating group tables: 0/1 done 00:09:08.057 Writing inode tables: 0/1 done 00:09:08.057 Creating journal (1024 blocks): done 00:09:08.057 Writing superblocks and filesystem accounting information: 0/1 done 00:09:08.057 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:08.057 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62781 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62781 ']' 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62781 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62781 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.315 killing process with pid 62781 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62781' 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62781 00:09:08.315 03:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62781 00:09:09.693 03:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:09.693 00:09:09.693 real 0m12.706s 00:09:09.693 user 0m16.148s 00:09:09.693 sys 0m5.492s 00:09:09.693 03:19:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.693 03:19:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:09.693 ************************************ 00:09:09.693 END TEST bdev_nbd 00:09:09.693 ************************************ 00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:09.693 skipping fio tests on NVMe due to multi-ns failures. 00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:09.693 03:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:09.693 03:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:09.693 03:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.693 03:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.693 ************************************ 00:09:09.693 START TEST bdev_verify 00:09:09.693 ************************************ 00:09:09.693 03:19:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:09.693 [2024-11-05 03:19:33.274057] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:09:09.693 [2024-11-05 03:19:33.274362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63213 ] 00:09:09.952 [2024-11-05 03:19:33.463643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.211 [2024-11-05 03:19:33.606364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.212 [2024-11-05 03:19:33.606392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.148 Running I/O for 5 seconds... 
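
For readers skimming the bdevperf command line above, the flags break down roughly as follows (meanings as given by bdevperf's usage text, so worth confirming with --help on your build, -C in particular):

    build/examples/bdevperf \
        --json test/bdev/bdev.json \  # bdev configuration to load at startup
        -q 128 \                      # per-job I/O queue depth
        -o 4096 \                     # I/O size in bytes (4 KiB)
        -w verify \                   # write, then read back and check the data
        -t 5 \                        # run time in seconds
        -C \                          # let every core submit I/O to every bdev
        -m 0x3                        # core mask: reactors on cores 0 and 1
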
00:09:13.459 17472.00 IOPS, 68.25 MiB/s [2024-11-05T03:19:37.978Z] 17344.00 IOPS, 67.75 MiB/s [2024-11-05T03:19:38.914Z] 17109.33 IOPS, 66.83 MiB/s [2024-11-05T03:19:39.850Z] 17248.00 IOPS, 67.38 MiB/s [2024-11-05T03:19:39.850Z] 17100.80 IOPS, 66.80 MiB/s 00:09:16.266 Latency(us) 00:09:16.266 [2024-11-05T03:19:39.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.266 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.266 Verification LBA range: start 0x0 length 0xbd0bd 00:09:16.266 Nvme0n1 : 5.06 1314.40 5.13 0.00 0.00 97192.99 24529.94 84222.97 00:09:16.266 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.266 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:16.266 Nvme0n1 : 5.08 1108.72 4.33 0.00 0.00 115205.08 14739.02 97277.53 00:09:16.266 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.266 Verification LBA range: start 0x0 length 0x4ff80 00:09:16.266 Nvme1n1p1 : 5.07 1313.53 5.13 0.00 0.00 97108.15 24214.10 79590.71 00:09:16.266 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.266 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:16.266 Nvme1n1p1 : 5.08 1108.43 4.33 0.00 0.00 115050.57 14844.30 90118.58 00:09:16.267 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x0 length 0x4ff7f 00:09:16.267 Nvme1n1p2 : 5.07 1313.05 5.13 0.00 0.00 96874.73 24529.94 77485.13 00:09:16.267 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:16.267 Nvme1n1p2 : 5.08 1107.88 4.33 0.00 0.00 114715.31 15160.13 87170.78 00:09:16.267 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x0 length 0x80000 00:09:16.267 Nvme2n1 : 5.07 1312.62 5.13 0.00 0.00 96770.63 24108.83 77064.02 00:09:16.267 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x80000 length 0x80000 00:09:16.267 Nvme2n1 : 5.08 1107.64 4.33 0.00 0.00 114456.08 14739.02 86749.66 00:09:16.267 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x0 length 0x80000 00:09:16.267 Nvme2n2 : 5.07 1312.19 5.13 0.00 0.00 96660.27 23898.27 79169.59 00:09:16.267 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x80000 length 0x80000 00:09:16.267 Nvme2n2 : 5.09 1107.39 4.33 0.00 0.00 114228.32 14317.91 88434.12 00:09:16.267 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x0 length 0x80000 00:09:16.267 Nvme2n3 : 5.07 1311.75 5.12 0.00 0.00 96557.51 22424.37 81696.28 00:09:16.267 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x80000 length 0x80000 00:09:16.267 Nvme2n3 : 5.09 1107.15 4.32 0.00 0.00 114143.46 14002.07 86328.55 00:09:16.267 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x0 length 0x20000 00:09:16.267 Nvme3n1 : 5.08 1311.42 5.12 0.00 0.00 96476.81 19371.28 83801.86 00:09:16.267 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:16.267 Verification LBA range: start 0x20000 length 0x20000 00:09:16.267 
Nvme3n1 : 5.09 1106.90 4.32 0.00 0.00 114025.41 13896.79 87591.89 00:09:16.267 [2024-11-05T03:19:39.851Z] =================================================================================================================== 00:09:16.267 [2024-11-05T03:19:39.851Z] Total : 16943.06 66.18 0.00 0.00 104936.91 13896.79 97277.53 00:09:17.645 00:09:17.645 real 0m7.860s 00:09:17.645 user 0m14.408s 00:09:17.645 sys 0m0.400s 00:09:17.645 ************************************ 00:09:17.645 END TEST bdev_verify 00:09:17.645 03:19:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.645 03:19:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:17.645 ************************************ 00:09:17.645 03:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:17.645 03:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:17.645 03:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.645 03:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.645 ************************************ 00:09:17.645 START TEST bdev_verify_big_io 00:09:17.645 ************************************ 00:09:17.645 03:19:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:17.645 [2024-11-05 03:19:41.213144] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:09:17.645 [2024-11-05 03:19:41.213257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63313 ] 00:09:17.904 [2024-11-05 03:19:41.396161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.163 [2024-11-05 03:19:41.537882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.163 [2024-11-05 03:19:41.537925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.102 Running I/O for 5 seconds... 
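
A quick cross-check of the throughput columns in these tables: bdevperf reports MiB/s as IOPS times the I/O size, so at 4 KiB per I/O that is IOPS/256, and at the 64 KiB size used by the big-I/O run below it is IOPS/16. For example, against the 5-second verify average above and the big-I/O Total row below:

    echo '17100.80 / 256' | bc -l   # -> 66.80, matching the reported 66.80 MiB/s
    echo '2421.29 / 16'   | bc -l   # -> 151.33, the Total row of the 64 KiB run
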
00:09:24.461 2261.00 IOPS, 141.31 MiB/s [2024-11-05T03:19:48.304Z] 3303.50 IOPS, 206.47 MiB/s [2024-11-05T03:19:48.896Z] 3732.33 IOPS, 233.27 MiB/s 00:09:25.312 Latency(us) 00:09:25.312 [2024-11-05T03:19:48.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.312 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0xbd0b 00:09:25.312 Nvme0n1 : 5.48 195.65 12.23 0.00 0.00 638719.53 24635.22 650201.34 00:09:25.312 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:25.312 Nvme0n1 : 5.67 86.18 5.39 0.00 0.00 1415432.28 17055.15 1549702.68 00:09:25.312 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x4ff8 00:09:25.312 Nvme1n1p1 : 5.48 198.47 12.40 0.00 0.00 621279.69 44217.06 653570.26 00:09:25.312 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:25.312 Nvme1n1p1 : 5.68 93.71 5.86 0.00 0.00 1240677.20 62325.00 1320616.20 00:09:25.312 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x4ff7 00:09:25.312 Nvme1n1p2 : 5.54 203.83 12.74 0.00 0.00 597935.86 21371.58 552502.70 00:09:25.312 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:25.312 Nvme1n1p2 : 5.80 106.85 6.68 0.00 0.00 1051600.56 41690.37 1320616.20 00:09:25.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x8000 00:09:25.312 Nvme2n1 : 5.55 203.42 12.71 0.00 0.00 589063.76 21792.69 616512.15 00:09:25.312 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x8000 length 0x8000 00:09:25.312 Nvme2n1 : 5.89 112.89 7.06 0.00 0.00 960900.04 26214.40 2344767.54 00:09:25.312 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x8000 00:09:25.312 Nvme2n2 : 5.55 202.83 12.68 0.00 0.00 580667.56 21897.97 613143.24 00:09:25.312 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x8000 length 0x8000 00:09:25.312 Nvme2n2 : 6.06 144.65 9.04 0.00 0.00 722063.49 24635.22 2371718.89 00:09:25.312 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x8000 00:09:25.312 Nvme2n3 : 5.55 207.54 12.97 0.00 0.00 561255.63 36426.44 626618.91 00:09:25.312 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x8000 length 0x8000 00:09:25.312 Nvme2n3 : 6.25 196.35 12.27 0.00 0.00 519885.78 10896.35 2183059.43 00:09:25.312 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x0 length 0x2000 00:09:25.312 Nvme3n1 : 5.56 218.67 13.67 0.00 0.00 526155.48 3342.60 640094.59 00:09:25.312 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.312 Verification LBA range: start 0x2000 length 0x2000 00:09:25.312 Nvme3n1 : 6.35 250.27 15.64 0.00 0.00 394909.61 401.38 2236962.13 00:09:25.312 
[2024-11-05T03:19:48.896Z] =================================================================================================================== 00:09:25.312 [2024-11-05T03:19:48.896Z] Total : 2421.29 151.33 0.00 0.00 660089.63 401.38 2371718.89 00:09:27.223 00:09:27.223 real 0m9.289s 00:09:27.223 user 0m17.219s 00:09:27.223 sys 0m0.437s 00:09:27.223 03:19:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.223 03:19:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 ************************************ 00:09:27.223 END TEST bdev_verify_big_io 00:09:27.223 ************************************ 00:09:27.223 03:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.223 03:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:27.223 03:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.223 03:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 ************************************ 00:09:27.223 START TEST bdev_write_zeroes 00:09:27.223 ************************************ 00:09:27.223 03:19:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.223 [2024-11-05 03:19:50.561793] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:09:27.223 [2024-11-05 03:19:50.561910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:09:27.223 [2024-11-05 03:19:50.742417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.482 [2024-11-05 03:19:50.874710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.051 Running I/O for 1 seconds... 
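
Every test in this log is wrapped by the run_test helper from autotest_common.sh, which produces the asterisk banners and the real/user/sys timings seen just above. A simplified sketch of what the traces imply it does; the actual helper also manages xtrace state and failure accounting:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                     # run the test body; elapsed time prints as real/user/sys
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
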
00:09:29.426 64000.00 IOPS, 250.00 MiB/s 00:09:29.426 Latency(us) 00:09:29.426 [2024-11-05T03:19:53.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.426 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme0n1 : 1.02 9132.46 35.67 0.00 0.00 13985.89 12264.97 24951.06 00:09:29.426 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme1n1p1 : 1.02 9123.02 35.64 0.00 0.00 13980.40 12054.41 25793.29 00:09:29.426 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme1n1p2 : 1.03 9113.23 35.60 0.00 0.00 13966.31 11791.22 24424.66 00:09:29.426 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme2n1 : 1.03 9104.08 35.56 0.00 0.00 13953.77 12054.41 24108.83 00:09:29.426 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme2n2 : 1.03 9095.65 35.53 0.00 0.00 13863.66 9896.20 22740.20 00:09:29.426 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.426 Nvme2n3 : 1.03 9087.20 35.50 0.00 0.00 13834.77 8159.10 23477.15 00:09:29.427 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:29.427 Nvme3n1 : 1.03 9016.17 35.22 0.00 0.00 13917.00 9211.89 24951.06 00:09:29.427 [2024-11-05T03:19:53.011Z] =================================================================================================================== 00:09:29.427 [2024-11-05T03:19:53.011Z] Total : 63671.83 248.72 0.00 0.00 13928.84 8159.10 25793.29 00:09:30.364 00:09:30.364 real 0m3.439s 00:09:30.364 user 0m2.980s 00:09:30.364 sys 0m0.339s 00:09:30.364 03:19:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.364 03:19:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:30.364 ************************************ 00:09:30.364 END TEST bdev_write_zeroes 00:09:30.364 ************************************ 00:09:30.623 03:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:30.623 03:19:53 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:30.623 03:19:53 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.623 03:19:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.623 ************************************ 00:09:30.623 START TEST bdev_json_nonenclosed 00:09:30.623 ************************************ 00:09:30.623 03:19:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:30.623 [2024-11-05 03:19:54.087971] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:09:30.623 [2024-11-05 03:19:54.088121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:09:30.882 [2024-11-05 03:19:54.265977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.882 [2024-11-05 03:19:54.409958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.882 [2024-11-05 03:19:54.410088] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:30.882 [2024-11-05 03:19:54.410115] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:30.882 [2024-11-05 03:19:54.410129] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.141 00:09:31.141 real 0m0.706s 00:09:31.141 user 0m0.445s 00:09:31.141 sys 0m0.156s 00:09:31.141 03:19:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.141 03:19:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:31.141 ************************************ 00:09:31.141 END TEST bdev_json_nonenclosed 00:09:31.141 ************************************ 00:09:31.400 03:19:54 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.400 03:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:31.400 03:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.400 03:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 ************************************ 00:09:31.400 START TEST bdev_json_nonarray 00:09:31.400 ************************************ 00:09:31.400 03:19:54 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.400 [2024-11-05 03:19:54.857653] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:09:31.400 [2024-11-05 03:19:54.857777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63517 ] 00:09:31.659 [2024-11-05 03:19:55.039040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.659 [2024-11-05 03:19:55.187698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.659 [2024-11-05 03:19:55.187833] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
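
The two negative tests here feed deliberately malformed configs to bdevperf. The fixture files themselves are not shown in this log, but one plausible way to reproduce each error, matching the messages logged above, would be:

    printf '"subsystems": []\n'   > /tmp/nonenclosed.json   # top level not enclosed in {}
    printf '{"subsystems": {}}\n' > /tmp/nonarray.json      # "subsystems" is not an array
    # Passing either file via --json makes json_config_prepare_ctx reject the
    # config with the errors above, and the app exits non-zero via spdk_app_stop.
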
00:09:31.659 [2024-11-05 03:19:55.187859] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:31.659 [2024-11-05 03:19:55.187872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.918 00:09:31.918 real 0m0.712s 00:09:31.918 user 0m0.451s 00:09:31.918 sys 0m0.156s 00:09:31.918 03:19:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.918 03:19:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:31.918 ************************************ 00:09:31.918 END TEST bdev_json_nonarray 00:09:31.918 ************************************ 00:09:32.177 03:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:32.177 03:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:32.177 03:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:32.177 03:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:32.177 03:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.177 03:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:32.177 ************************************ 00:09:32.177 START TEST bdev_gpt_uuid 00:09:32.177 ************************************ 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63543 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63543 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63543 ']' 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.177 03:19:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:32.177 [2024-11-05 03:19:55.671387] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
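
With spdk_tgt up, bdev_gpt_uuid loads the bdev config and asserts that each GPT partition bdev can be looked up by its unique partition GUID. A condensed sketch of the RPC calls and jq filters traced below (paths as in this log; the second partition gets the same treatment with its own GUID):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'
    $rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $rpc bdev_wait_for_examine
    bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    [[ $(jq -r length <<<"$bdev") == 1 ]]   # the GUID resolves to exactly one bdev
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == '6f89f330-603b-4116-ac73-2ca8eae53030' ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == '6f89f330-603b-4116-ac73-2ca8eae53030' ]]
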
00:09:32.177 [2024-11-05 03:19:55.671524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63543 ] 00:09:32.435 [2024-11-05 03:19:55.858637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.435 [2024-11-05 03:19:55.991060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.814 03:19:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.814 03:19:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:09:33.814 03:19:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:33.814 03:19:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.814 03:19:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:33.814 Some configs were skipped because the RPC state that can call them passed over. 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:33.814 { 00:09:33.814 "name": "Nvme1n1p1", 00:09:33.814 "aliases": [ 00:09:33.814 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:33.814 ], 00:09:33.814 "product_name": "GPT Disk", 00:09:33.814 "block_size": 4096, 00:09:33.814 "num_blocks": 655104, 00:09:33.814 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:33.814 "assigned_rate_limits": { 00:09:33.814 "rw_ios_per_sec": 0, 00:09:33.814 "rw_mbytes_per_sec": 0, 00:09:33.814 "r_mbytes_per_sec": 0, 00:09:33.814 "w_mbytes_per_sec": 0 00:09:33.814 }, 00:09:33.814 "claimed": false, 00:09:33.814 "zoned": false, 00:09:33.814 "supported_io_types": { 00:09:33.814 "read": true, 00:09:33.814 "write": true, 00:09:33.814 "unmap": true, 00:09:33.814 "flush": true, 00:09:33.814 "reset": true, 00:09:33.814 "nvme_admin": false, 00:09:33.814 "nvme_io": false, 00:09:33.814 "nvme_io_md": false, 00:09:33.814 "write_zeroes": true, 00:09:33.814 "zcopy": false, 00:09:33.814 "get_zone_info": false, 00:09:33.814 "zone_management": false, 00:09:33.814 "zone_append": false, 00:09:33.814 "compare": true, 00:09:33.814 "compare_and_write": false, 00:09:33.814 "abort": true, 00:09:33.814 "seek_hole": false, 00:09:33.814 "seek_data": false, 00:09:33.814 "copy": true, 00:09:33.814 "nvme_iov_md": false 00:09:33.814 }, 00:09:33.814 "driver_specific": { 
00:09:33.814 "gpt": { 00:09:33.814 "base_bdev": "Nvme1n1", 00:09:33.814 "offset_blocks": 256, 00:09:33.814 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:33.814 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:33.814 "partition_name": "SPDK_TEST_first" 00:09:33.814 } 00:09:33.814 } 00:09:33.814 } 00:09:33.814 ]' 00:09:33.814 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:34.073 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:34.074 { 00:09:34.074 "name": "Nvme1n1p2", 00:09:34.074 "aliases": [ 00:09:34.074 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:34.074 ], 00:09:34.074 "product_name": "GPT Disk", 00:09:34.074 "block_size": 4096, 00:09:34.074 "num_blocks": 655103, 00:09:34.074 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:34.074 "assigned_rate_limits": { 00:09:34.074 "rw_ios_per_sec": 0, 00:09:34.074 "rw_mbytes_per_sec": 0, 00:09:34.074 "r_mbytes_per_sec": 0, 00:09:34.074 "w_mbytes_per_sec": 0 00:09:34.074 }, 00:09:34.074 "claimed": false, 00:09:34.074 "zoned": false, 00:09:34.074 "supported_io_types": { 00:09:34.074 "read": true, 00:09:34.074 "write": true, 00:09:34.074 "unmap": true, 00:09:34.074 "flush": true, 00:09:34.074 "reset": true, 00:09:34.074 "nvme_admin": false, 00:09:34.074 "nvme_io": false, 00:09:34.074 "nvme_io_md": false, 00:09:34.074 "write_zeroes": true, 00:09:34.074 "zcopy": false, 00:09:34.074 "get_zone_info": false, 00:09:34.074 "zone_management": false, 00:09:34.074 "zone_append": false, 00:09:34.074 "compare": true, 00:09:34.074 "compare_and_write": false, 00:09:34.074 "abort": true, 00:09:34.074 "seek_hole": false, 00:09:34.074 "seek_data": false, 00:09:34.074 "copy": true, 00:09:34.074 "nvme_iov_md": false 00:09:34.074 }, 00:09:34.074 "driver_specific": { 00:09:34.074 "gpt": { 00:09:34.074 "base_bdev": "Nvme1n1", 00:09:34.074 "offset_blocks": 655360, 00:09:34.074 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:34.074 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:34.074 "partition_name": "SPDK_TEST_second" 00:09:34.074 } 00:09:34.074 } 00:09:34.074 } 00:09:34.074 ]' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63543 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63543 ']' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63543 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.074 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63543 00:09:34.333 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.333 killing process with pid 63543 00:09:34.334 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.334 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63543' 00:09:34.334 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63543 00:09:34.334 03:19:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63543 00:09:36.865 00:09:36.865 real 0m4.747s 00:09:36.865 user 0m4.644s 00:09:36.865 sys 0m0.730s 00:09:36.865 03:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.865 03:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:36.865 ************************************ 00:09:36.865 END TEST bdev_gpt_uuid 00:09:36.865 ************************************ 00:09:36.865 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:36.865 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:36.865 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:36.865 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:36.866 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:36.866 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:36.866 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:36.866 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:36.866 03:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.692 Waiting for block devices as requested 00:09:37.951 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.951 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:37.951 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.211 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.536 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:43.536 03:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:43.536 03:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:43.536 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:43.536 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:43.536 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:43.536 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:43.536 03:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:43.536 00:09:43.536 real 1m7.175s 00:09:43.536 user 1m22.403s 00:09:43.536 sys 0m13.221s 00:09:43.536 03:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.536 03:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.536 ************************************ 00:09:43.536 END TEST blockdev_nvme_gpt 00:09:43.536 ************************************ 00:09:43.536 03:20:07 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:43.536 03:20:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:43.537 03:20:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.537 03:20:07 -- common/autotest_common.sh@10 -- # set +x 00:09:43.537 ************************************ 00:09:43.537 START TEST nvme 00:09:43.537 ************************************ 00:09:43.537 03:20:07 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:43.795 * Looking for test storage... 00:09:43.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.795 03:20:07 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.795 03:20:07 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.795 03:20:07 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.795 03:20:07 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.795 03:20:07 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.795 03:20:07 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:43.795 03:20:07 nvme -- scripts/common.sh@345 -- # : 1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.795 03:20:07 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.795 03:20:07 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@353 -- # local d=1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.795 03:20:07 nvme -- scripts/common.sh@355 -- # echo 1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.795 03:20:07 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@353 -- # local d=2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.795 03:20:07 nvme -- scripts/common.sh@355 -- # echo 2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.795 03:20:07 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.795 03:20:07 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.795 03:20:07 nvme -- scripts/common.sh@368 -- # return 0 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.795 --rc genhtml_branch_coverage=1 00:09:43.795 --rc genhtml_function_coverage=1 00:09:43.795 --rc genhtml_legend=1 00:09:43.795 --rc geninfo_all_blocks=1 00:09:43.795 --rc geninfo_unexecuted_blocks=1 00:09:43.795 00:09:43.795 ' 00:09:43.795 03:20:07 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.795 --rc genhtml_branch_coverage=1 00:09:43.795 --rc genhtml_function_coverage=1 00:09:43.795 --rc genhtml_legend=1 00:09:43.796 --rc geninfo_all_blocks=1 00:09:43.796 --rc geninfo_unexecuted_blocks=1 00:09:43.796 00:09:43.796 ' 00:09:43.796 03:20:07 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.796 --rc genhtml_branch_coverage=1 00:09:43.796 --rc genhtml_function_coverage=1 00:09:43.796 --rc genhtml_legend=1 00:09:43.796 --rc geninfo_all_blocks=1 00:09:43.796 --rc geninfo_unexecuted_blocks=1 00:09:43.796 00:09:43.796 ' 00:09:43.796 03:20:07 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.796 --rc genhtml_branch_coverage=1 00:09:43.796 --rc genhtml_function_coverage=1 00:09:43.796 --rc genhtml_legend=1 00:09:43.796 --rc geninfo_all_blocks=1 00:09:43.796 --rc geninfo_unexecuted_blocks=1 00:09:43.796 00:09:43.796 ' 00:09:43.796 03:20:07 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.298 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.298 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.298 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.298 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.557 03:20:08 nvme -- nvme/nvme.sh@79 -- # uname 00:09:45.557 03:20:08 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:45.557 03:20:08 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:45.557 03:20:08 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:45.557 03:20:08 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1073 -- # stubpid=64208 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:45.557 Waiting for stub to be ready for secondary processes... 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to be ready for secondary processes... 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64208 ]] 00:09:45.557 03:20:08 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:45.557 [2024-11-05 03:20:08.988730] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:09:45.557 [2024-11-05 03:20:08.988854] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:46.494 03:20:09 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:46.494 03:20:09 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64208 ]] 00:09:46.494 03:20:09 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:47.462 [2024-11-05 03:20:10.649102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.462 [2024-11-05 03:20:10.777847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.462 [2024-11-05 03:20:10.778019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.462 [2024-11-05 03:20:10.778064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.462 [2024-11-05 03:20:10.796435] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:47.462 [2024-11-05 03:20:10.796475] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.462 [2024-11-05 03:20:10.812281] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:47.462 [2024-11-05 03:20:10.812410] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:47.462 [2024-11-05 03:20:10.815385] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.462 [2024-11-05 03:20:10.815609] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:47.462 [2024-11-05 03:20:10.815678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:47.462 [2024-11-05 03:20:10.818982] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.462 [2024-11-05 03:20:10.819216] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:47.462 [2024-11-05 03:20:10.819333] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:47.462 [2024-11-05 03:20:10.823416] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.462 [2024-11-05 03:20:10.823657] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:47.462 [2024-11-05 03:20:10.823765] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:47.462 [2024-11-05 03:20:10.823841] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:47.462 [2024-11-05 03:20:10.823910] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:47.462 03:20:10 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:47.462 done. 00:09:47.462 03:20:10 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:09:47.462 03:20:10 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:47.462 03:20:10 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:09:47.462 03:20:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.462 03:20:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.462 ************************************ 00:09:47.462 START TEST nvme_reset 00:09:47.462 ************************************ 00:09:47.462 03:20:10 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:47.723 Initializing NVMe Controllers 00:09:47.723 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:47.723 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:47.723 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:47.723 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:47.723 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:47.723 00:09:47.723 real 0m0.353s 00:09:47.723 user 0m0.108s 00:09:47.723 sys 0m0.179s 00:09:47.723 03:20:11 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.723 03:20:11 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:47.982 ************************************ 00:09:47.982 END TEST nvme_reset 00:09:47.982 ************************************ 00:09:47.982 03:20:11 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:47.982 03:20:11 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:47.982 03:20:11 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.982 03:20:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.982 ************************************ 00:09:47.982 START TEST nvme_identify 00:09:47.982 ************************************ 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:09:47.982 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:47.982 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:47.982 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:47.982 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:47.982 03:20:11 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:47.982 03:20:11 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.982 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:48.244 ===================================================== 00:09:48.244 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.244 ===================================================== 00:09:48.244 Controller Capabilities/Features 00:09:48.244 ================================ 00:09:48.244 Vendor ID: 1b36 00:09:48.244 Subsystem Vendor ID: 1af4 00:09:48.244 Serial Number: 12340 00:09:48.244 Model Number: QEMU NVMe Ctrl 00:09:48.244 Firmware Version: 8.0.0 00:09:48.244 Recommended Arb Burst: 6 00:09:48.244 IEEE OUI Identifier: 00 54 52 00:09:48.244 Multi-path I/O 00:09:48.244 May have multiple subsystem ports: No 00:09:48.244 May have multiple controllers: No 00:09:48.244 Associated with SR-IOV VF: No 00:09:48.244 Max Data Transfer Size: 524288 00:09:48.244 Max Number of Namespaces: 256 00:09:48.244 Max Number of I/O Queues: 64 00:09:48.244 NVMe Specification Version (VS): 1.4 00:09:48.244 NVMe Specification Version (Identify): 1.4 00:09:48.244 Maximum Queue Entries: 2048 00:09:48.244 Contiguous Queues Required: Yes 00:09:48.244 Arbitration Mechanisms Supported 00:09:48.244 Weighted Round Robin: Not Supported 00:09:48.244 Vendor Specific: Not Supported 00:09:48.244 Reset Timeout: 7500 ms 00:09:48.244 Doorbell Stride: 4 bytes 00:09:48.244 NVM Subsystem Reset: Not Supported 00:09:48.244 Command Sets Supported 00:09:48.244 NVM Command Set: Supported 00:09:48.244 Boot Partition: Not Supported 00:09:48.244 Memory Page Size Minimum: 4096 bytes 00:09:48.244 Memory Page Size Maximum: 65536 bytes 00:09:48.244 Persistent Memory Region: Not Supported 00:09:48.244 Optional Asynchronous Events Supported 00:09:48.244 Namespace Attribute Notices: Supported 00:09:48.244 Firmware Activation Notices: Not Supported 00:09:48.244 ANA Change Notices: Not Supported 00:09:48.244 PLE Aggregate Log Change Notices: Not Supported 00:09:48.244 LBA Status Info Alert Notices: Not Supported 00:09:48.244 EGE Aggregate Log Change Notices: Not Supported 00:09:48.244 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.244 Zone Descriptor Change Notices: Not Supported 00:09:48.244 Discovery Log Change Notices: Not Supported 00:09:48.244 Controller Attributes 00:09:48.244 128-bit Host Identifier: Not Supported 00:09:48.244 Non-Operational Permissive Mode: Not Supported 00:09:48.244 NVM Sets: Not Supported 00:09:48.244 Read Recovery Levels: Not Supported 00:09:48.244 Endurance Groups: Not Supported 00:09:48.244 Predictable Latency Mode: Not Supported 00:09:48.244 Traffic Based Keep ALive: Not Supported 00:09:48.244 Namespace Granularity: Not Supported 00:09:48.244 SQ Associations: Not Supported 00:09:48.244 UUID List: Not Supported 00:09:48.244 Multi-Domain Subsystem: Not Supported 00:09:48.244 Fixed Capacity Management: Not Supported 00:09:48.244 Variable Capacity Management: Not Supported 00:09:48.244 Delete Endurance Group: Not Supported 00:09:48.244 Delete NVM Set: Not Supported 00:09:48.244 Extended LBA Formats Supported: Supported 00:09:48.244 Flexible Data Placement Supported: Not Supported 00:09:48.244 00:09:48.244 Controller Memory Buffer Support 00:09:48.244 ================================ 00:09:48.244 Supported: No 00:09:48.244 00:09:48.244 Persistent Memory Region Support 00:09:48.244 ================================ 00:09:48.244 Supported: No 00:09:48.244 00:09:48.244 Admin 
Command Set Attributes 00:09:48.244 ============================ 00:09:48.244 Security Send/Receive: Not Supported 00:09:48.244 Format NVM: Supported 00:09:48.244 Firmware Activate/Download: Not Supported 00:09:48.244 Namespace Management: Supported 00:09:48.244 Device Self-Test: Not Supported 00:09:48.244 Directives: Supported 00:09:48.244 NVMe-MI: Not Supported 00:09:48.244 Virtualization Management: Not Supported 00:09:48.244 Doorbell Buffer Config: Supported 00:09:48.244 Get LBA Status Capability: Not Supported 00:09:48.244 Command & Feature Lockdown Capability: Not Supported 00:09:48.244 Abort Command Limit: 4 00:09:48.244 Async Event Request Limit: 4 00:09:48.244 Number of Firmware Slots: N/A 00:09:48.244 Firmware Slot 1 Read-Only: N/A 00:09:48.244 Firmware Activation Without Reset: N/A 00:09:48.244 Multiple Update Detection Support: N/A 00:09:48.244 Firmware Update Granularity: No Information Provided 00:09:48.244 Per-Namespace SMART Log: Yes 00:09:48.244 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.244 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:48.244 Command Effects Log Page: Supported 00:09:48.244 Get Log Page Extended Data: Supported 00:09:48.244 Telemetry Log Pages: Not Supported 00:09:48.244 Persistent Event Log Pages: Not Supported 00:09:48.244 Supported Log Pages Log Page: May Support 00:09:48.244 Commands Supported & Effects Log Page: Not Supported 00:09:48.244 Feature Identifiers & Effects Log Page:May Support 00:09:48.244 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.244 Data Area 4 for Telemetry Log: Not Supported 00:09:48.244 Error Log Page Entries Supported: 1 00:09:48.244 Keep Alive: Not Supported 00:09:48.244 00:09:48.244 NVM Command Set Attributes 00:09:48.244 ========================== 00:09:48.244 Submission Queue Entry Size 00:09:48.244 Max: 64 00:09:48.245 Min: 64 00:09:48.245 Completion Queue Entry Size 00:09:48.245 Max: 16 00:09:48.245 Min: 16 00:09:48.245 Number of Namespaces: 256 00:09:48.245 Compare Command: Supported 00:09:48.245 Write Uncorrectable Command: Not Supported 00:09:48.245 Dataset Management Command: Supported 00:09:48.245 Write Zeroes Command: Supported 00:09:48.245 Set Features Save Field: Supported 00:09:48.245 Reservations: Not Supported 00:09:48.245 Timestamp: Supported 00:09:48.245 Copy: Supported 00:09:48.245 Volatile Write Cache: Present 00:09:48.245 Atomic Write Unit (Normal): 1 00:09:48.245 Atomic Write Unit (PFail): 1 00:09:48.245 Atomic Compare & Write Unit: 1 00:09:48.245 Fused Compare & Write: Not Supported 00:09:48.245 Scatter-Gather List 00:09:48.245 SGL Command Set: Supported 00:09:48.245 SGL Keyed: Not Supported 00:09:48.245 SGL Bit Bucket Descriptor: Not Supported 00:09:48.245 SGL Metadata Pointer: Not Supported 00:09:48.245 Oversized SGL: Not Supported 00:09:48.245 SGL Metadata Address: Not Supported 00:09:48.245 SGL Offset: Not Supported 00:09:48.245 Transport SGL Data Block: Not Supported 00:09:48.245 Replay Protected Memory Block: Not Supported 00:09:48.245 00:09:48.245 Firmware Slot Information 00:09:48.245 ========================= 00:09:48.245 Active slot: 1 00:09:48.245 Slot 1 Firmware Revision: 1.0 00:09:48.245 00:09:48.245 00:09:48.245 Commands Supported and Effects 00:09:48.245 ============================== 00:09:48.245 Admin Commands 00:09:48.245 -------------- 00:09:48.245 Delete I/O Submission Queue (00h): Supported 00:09:48.245 Create I/O Submission Queue (01h): Supported 00:09:48.245 Get Log Page (02h): Supported 00:09:48.245 Delete I/O Completion Queue (04h): Supported 
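A note on the enumeration step traced just above this identify report: get_nvme_bdfs builds the device list by piping scripts/gen_nvme.sh through jq and reading the PCIe addresses into a bash array. A minimal standalone sketch of that pattern, reusing the repo path visible in this log (the error message is illustrative, not the helper's real wording):

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh emits bdev_nvme JSON config; extract each controller's PCIe address (traddr).
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Fail early if nothing was enumerated, mirroring the (( count == 0 )) guard in the trace.
  if (( ${#bdfs[@]} == 0 )); then
      echo "no NVMe devices found" >&2
      exit 1
  fi
  printf '%s\n' "${bdfs[@]}"

On this runner the array ends up holding the four QEMU controllers printed above: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0.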
00:09:48.245 Create I/O Completion Queue (05h): Supported 00:09:48.245 Identify (06h): Supported 00:09:48.245 Abort (08h): Supported 00:09:48.245 Set Features (09h): Supported 00:09:48.245 Get Features (0Ah): Supported 00:09:48.245 Asynchronous Event Request (0Ch): Supported 00:09:48.245 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.245 Directive Send (19h): Supported 00:09:48.245 Directive Receive (1Ah): Supported 00:09:48.245 Virtualization Management (1Ch): Supported 00:09:48.245 Doorbell Buffer Config (7Ch): Supported 00:09:48.245 Format NVM (80h): Supported LBA-Change 00:09:48.245 I/O Commands 00:09:48.245 ------------ 00:09:48.245 Flush (00h): Supported LBA-Change 00:09:48.245 Write (01h): Supported LBA-Change 00:09:48.245 Read (02h): Supported 00:09:48.245 Compare (05h): Supported 00:09:48.245 Write Zeroes (08h): Supported LBA-Change 00:09:48.245 Dataset Management (09h): Supported LBA-Change 00:09:48.245 Unknown (0Ch): Supported 00:09:48.245 Unknown (12h): Supported 00:09:48.245 Copy (19h): Supported LBA-Change 00:09:48.245 Unknown (1Dh): Supported LBA-Change 00:09:48.245 00:09:48.245 Error Log 00:09:48.245 ========= 00:09:48.245 00:09:48.245 Arbitration 00:09:48.245 =========== 00:09:48.245 Arbitration Burst: no limit 00:09:48.245 00:09:48.245 Power Management 00:09:48.245 ================ 00:09:48.245 Number of Power States: 1 00:09:48.245 Current Power State: Power State #0 00:09:48.245 Power State #0: 00:09:48.245 Max Power: 25.00 W 00:09:48.245 Non-Operational State: Operational 00:09:48.245 Entry Latency: 16 microseconds 00:09:48.245 Exit Latency: 4 microseconds 00:09:48.245 Relative Read Throughput: 0 00:09:48.245 Relative Read Latency: 0 00:09:48.245 Relative Write Throughput: 0 00:09:48.245 Relative Write Latency: 0 00:09:48.245 [2024-11-05 03:20:11.717365] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64239 terminated unexpectedly 00:09:48.245 [2024-11-05 03:20:11.718580] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64239 terminated unexpectedly 00:09:48.245 Idle Power: Not Reported 00:09:48.245 Active Power: Not Reported 00:09:48.245 Non-Operational Permissive Mode: Not Supported 00:09:48.245 00:09:48.245 Health Information 00:09:48.245 ================== 00:09:48.245 Critical Warnings: 00:09:48.245 Available Spare Space: OK 00:09:48.245 Temperature: OK 00:09:48.245 Device Reliability: OK 00:09:48.245 Read Only: No 00:09:48.245 Volatile Memory Backup: OK 00:09:48.245 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.245 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.245 Available Spare: 0% 00:09:48.245 Available Spare Threshold: 0% 00:09:48.245 Life Percentage Used: 0% 00:09:48.245 Data Units Read: 738 00:09:48.245 Data Units Written: 666 00:09:48.245 Host Read Commands: 32965 00:09:48.245 Host Write Commands: 32751 00:09:48.245 Controller Busy Time: 0 minutes 00:09:48.245 Power Cycles: 0 00:09:48.245 Power On Hours: 0 hours 00:09:48.245 Unsafe Shutdowns: 0 00:09:48.245 Unrecoverable Media Errors: 0 00:09:48.245 Lifetime Error Log Entries: 0 00:09:48.245 Warning Temperature Time: 0 minutes 00:09:48.245 Critical Temperature Time: 0 minutes 00:09:48.245 00:09:48.245 Number of Queues 00:09:48.245 ================ 00:09:48.245 Number of I/O Submission Queues: 64 00:09:48.245 Number of I/O Completion Queues: 64 00:09:48.245 00:09:48.245 ZNS Specific Controller Data 00:09:48.245 ============================ 00:09:48.245 Zone Append Size Limit: 0 00:09:48.245 
00:09:48.245 00:09:48.245 Active Namespaces 00:09:48.245 ================= 00:09:48.245 Namespace ID:1 00:09:48.245 Error Recovery Timeout: Unlimited 00:09:48.245 Command Set Identifier: NVM (00h) 00:09:48.245 Deallocate: Supported 00:09:48.245 Deallocated/Unwritten Error: Supported 00:09:48.245 Deallocated Read Value: All 0x00 00:09:48.245 Deallocate in Write Zeroes: Not Supported 00:09:48.245 Deallocated Guard Field: 0xFFFF 00:09:48.245 Flush: Supported 00:09:48.245 Reservation: Not Supported 00:09:48.245 Metadata Transferred as: Separate Metadata Buffer 00:09:48.245 Namespace Sharing Capabilities: Private 00:09:48.245 Size (in LBAs): 1548666 (5GiB) 00:09:48.245 Capacity (in LBAs): 1548666 (5GiB) 00:09:48.245 Utilization (in LBAs): 1548666 (5GiB) 00:09:48.245 Thin Provisioning: Not Supported 00:09:48.245 Per-NS Atomic Units: No 00:09:48.245 Maximum Single Source Range Length: 128 00:09:48.245 Maximum Copy Length: 128 00:09:48.245 Maximum Source Range Count: 128 00:09:48.245 NGUID/EUI64 Never Reused: No 00:09:48.245 Namespace Write Protected: No 00:09:48.245 Number of LBA Formats: 8 00:09:48.245 Current LBA Format: LBA Format #07 00:09:48.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.245 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.245 00:09:48.245 NVM Specific Namespace Data 00:09:48.245 =========================== 00:09:48.245 Logical Block Storage Tag Mask: 0 00:09:48.245 Protection Information Capabilities: 00:09:48.245 16b Guard Protection Information Storage Tag Support: No 00:09:48.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.245 Storage Tag Check Read Support: No 00:09:48.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.245 ===================================================== 00:09:48.245 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:48.245 ===================================================== 00:09:48.245 Controller Capabilities/Features 00:09:48.245 ================================ 00:09:48.245 Vendor ID: 1b36 00:09:48.245 Subsystem Vendor ID: 1af4 00:09:48.245 Serial Number: 12341 00:09:48.245 Model Number: QEMU NVMe Ctrl 00:09:48.245 Firmware Version: 8.0.0 00:09:48.245 Recommended Arb Burst: 6 00:09:48.246 IEEE OUI Identifier: 00 54 52 00:09:48.246 Multi-path I/O 00:09:48.246 May have multiple subsystem ports: No 00:09:48.246 May have multiple controllers: No 
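Since spdk_nvme_identify emits the flat text report shown here, single fields can be scraped with ordinary grep/awk when a test or a quick shell session only needs one value. A small sketch, assuming the same binary path and -i 0 shared-memory id used to produce this dump (the match strings are copied from the report format itself):

  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  # One line per controller: the serial number field as printed in the report.
  "$identify" -i 0 | grep 'Serial Number:'
  # Capture a single numeric field, e.g. the first controller's max data transfer size.
  mdts=$("$identify" -i 0 | awk -F': *' '/Max Data Transfer Size:/ {print $2; exit}')
  echo "MDTS: $mdts bytes"

Against the controllers in this run that yields serials 12340, 12341, 12343 and 12342, and an MDTS of 524288 bytes.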
00:09:48.246 Associated with SR-IOV VF: No 00:09:48.246 Max Data Transfer Size: 524288 00:09:48.246 Max Number of Namespaces: 256 00:09:48.246 Max Number of I/O Queues: 64 00:09:48.246 NVMe Specification Version (VS): 1.4 00:09:48.246 NVMe Specification Version (Identify): 1.4 00:09:48.246 Maximum Queue Entries: 2048 00:09:48.246 Contiguous Queues Required: Yes 00:09:48.246 Arbitration Mechanisms Supported 00:09:48.246 Weighted Round Robin: Not Supported 00:09:48.246 Vendor Specific: Not Supported 00:09:48.246 Reset Timeout: 7500 ms 00:09:48.246 Doorbell Stride: 4 bytes 00:09:48.246 NVM Subsystem Reset: Not Supported 00:09:48.246 Command Sets Supported 00:09:48.246 NVM Command Set: Supported 00:09:48.246 Boot Partition: Not Supported 00:09:48.246 Memory Page Size Minimum: 4096 bytes 00:09:48.246 Memory Page Size Maximum: 65536 bytes 00:09:48.246 Persistent Memory Region: Not Supported 00:09:48.246 Optional Asynchronous Events Supported 00:09:48.246 Namespace Attribute Notices: Supported 00:09:48.246 Firmware Activation Notices: Not Supported 00:09:48.246 ANA Change Notices: Not Supported 00:09:48.246 PLE Aggregate Log Change Notices: Not Supported 00:09:48.246 LBA Status Info Alert Notices: Not Supported 00:09:48.246 EGE Aggregate Log Change Notices: Not Supported 00:09:48.246 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.246 Zone Descriptor Change Notices: Not Supported 00:09:48.246 Discovery Log Change Notices: Not Supported 00:09:48.246 Controller Attributes 00:09:48.246 128-bit Host Identifier: Not Supported 00:09:48.246 Non-Operational Permissive Mode: Not Supported 00:09:48.246 NVM Sets: Not Supported 00:09:48.246 Read Recovery Levels: Not Supported 00:09:48.246 Endurance Groups: Not Supported 00:09:48.246 Predictable Latency Mode: Not Supported 00:09:48.246 Traffic Based Keep ALive: Not Supported 00:09:48.246 Namespace Granularity: Not Supported 00:09:48.246 SQ Associations: Not Supported 00:09:48.246 UUID List: Not Supported 00:09:48.246 Multi-Domain Subsystem: Not Supported 00:09:48.246 Fixed Capacity Management: Not Supported 00:09:48.246 Variable Capacity Management: Not Supported 00:09:48.246 Delete Endurance Group: Not Supported 00:09:48.246 Delete NVM Set: Not Supported 00:09:48.246 Extended LBA Formats Supported: Supported 00:09:48.246 Flexible Data Placement Supported: Not Supported 00:09:48.246 00:09:48.246 Controller Memory Buffer Support 00:09:48.246 ================================ 00:09:48.246 Supported: No 00:09:48.246 00:09:48.246 Persistent Memory Region Support 00:09:48.246 ================================ 00:09:48.246 Supported: No 00:09:48.246 00:09:48.246 Admin Command Set Attributes 00:09:48.246 ============================ 00:09:48.246 Security Send/Receive: Not Supported 00:09:48.246 Format NVM: Supported 00:09:48.246 Firmware Activate/Download: Not Supported 00:09:48.246 Namespace Management: Supported 00:09:48.246 Device Self-Test: Not Supported 00:09:48.246 Directives: Supported 00:09:48.246 NVMe-MI: Not Supported 00:09:48.246 Virtualization Management: Not Supported 00:09:48.246 Doorbell Buffer Config: Supported 00:09:48.246 Get LBA Status Capability: Not Supported 00:09:48.246 Command & Feature Lockdown Capability: Not Supported 00:09:48.246 Abort Command Limit: 4 00:09:48.246 Async Event Request Limit: 4 00:09:48.246 Number of Firmware Slots: N/A 00:09:48.246 Firmware Slot 1 Read-Only: N/A 00:09:48.246 Firmware Activation Without Reset: N/A 00:09:48.246 Multiple Update Detection Support: N/A 00:09:48.246 Firmware Update Granularity: No 
Information Provided 00:09:48.246 Per-Namespace SMART Log: Yes 00:09:48.246 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.246 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:48.246 Command Effects Log Page: Supported 00:09:48.246 Get Log Page Extended Data: Supported 00:09:48.246 Telemetry Log Pages: Not Supported 00:09:48.246 Persistent Event Log Pages: Not Supported 00:09:48.246 Supported Log Pages Log Page: May Support 00:09:48.246 Commands Supported & Effects Log Page: Not Supported 00:09:48.246 Feature Identifiers & Effects Log Page:May Support 00:09:48.246 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.246 Data Area 4 for Telemetry Log: Not Supported 00:09:48.246 Error Log Page Entries Supported: 1 00:09:48.246 Keep Alive: Not Supported 00:09:48.246 00:09:48.246 NVM Command Set Attributes 00:09:48.246 ========================== 00:09:48.246 Submission Queue Entry Size 00:09:48.246 Max: 64 00:09:48.246 Min: 64 00:09:48.246 Completion Queue Entry Size 00:09:48.246 Max: 16 00:09:48.246 Min: 16 00:09:48.246 Number of Namespaces: 256 00:09:48.246 Compare Command: Supported 00:09:48.246 Write Uncorrectable Command: Not Supported 00:09:48.246 Dataset Management Command: Supported 00:09:48.246 Write Zeroes Command: Supported 00:09:48.246 Set Features Save Field: Supported 00:09:48.246 Reservations: Not Supported 00:09:48.246 Timestamp: Supported 00:09:48.246 Copy: Supported 00:09:48.246 Volatile Write Cache: Present 00:09:48.246 Atomic Write Unit (Normal): 1 00:09:48.246 Atomic Write Unit (PFail): 1 00:09:48.246 Atomic Compare & Write Unit: 1 00:09:48.246 Fused Compare & Write: Not Supported 00:09:48.246 Scatter-Gather List 00:09:48.246 SGL Command Set: Supported 00:09:48.246 SGL Keyed: Not Supported 00:09:48.246 SGL Bit Bucket Descriptor: Not Supported 00:09:48.246 SGL Metadata Pointer: Not Supported 00:09:48.246 Oversized SGL: Not Supported 00:09:48.246 SGL Metadata Address: Not Supported 00:09:48.246 SGL Offset: Not Supported 00:09:48.246 Transport SGL Data Block: Not Supported 00:09:48.246 Replay Protected Memory Block: Not Supported 00:09:48.246 00:09:48.246 Firmware Slot Information 00:09:48.246 ========================= 00:09:48.246 Active slot: 1 00:09:48.246 Slot 1 Firmware Revision: 1.0 00:09:48.246 00:09:48.246 00:09:48.246 Commands Supported and Effects 00:09:48.246 ============================== 00:09:48.246 Admin Commands 00:09:48.246 -------------- 00:09:48.246 Delete I/O Submission Queue (00h): Supported 00:09:48.246 Create I/O Submission Queue (01h): Supported 00:09:48.246 Get Log Page (02h): Supported 00:09:48.246 Delete I/O Completion Queue (04h): Supported 00:09:48.246 Create I/O Completion Queue (05h): Supported 00:09:48.246 Identify (06h): Supported 00:09:48.246 Abort (08h): Supported 00:09:48.246 Set Features (09h): Supported 00:09:48.246 Get Features (0Ah): Supported 00:09:48.246 Asynchronous Event Request (0Ch): Supported 00:09:48.246 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.246 Directive Send (19h): Supported 00:09:48.246 Directive Receive (1Ah): Supported 00:09:48.246 Virtualization Management (1Ch): Supported 00:09:48.246 Doorbell Buffer Config (7Ch): Supported 00:09:48.246 Format NVM (80h): Supported LBA-Change 00:09:48.246 I/O Commands 00:09:48.246 ------------ 00:09:48.246 Flush (00h): Supported LBA-Change 00:09:48.246 Write (01h): Supported LBA-Change 00:09:48.246 Read (02h): Supported 00:09:48.246 Compare (05h): Supported 00:09:48.246 Write Zeroes (08h): Supported LBA-Change 00:09:48.246 Dataset Management 
(09h): Supported LBA-Change 00:09:48.246 Unknown (0Ch): Supported 00:09:48.246 Unknown (12h): Supported 00:09:48.246 Copy (19h): Supported LBA-Change 00:09:48.246 Unknown (1Dh): Supported LBA-Change 00:09:48.246 00:09:48.246 Error Log 00:09:48.246 ========= 00:09:48.246 00:09:48.246 Arbitration 00:09:48.246 =========== 00:09:48.246 Arbitration Burst: no limit 00:09:48.246 00:09:48.246 Power Management 00:09:48.246 ================ 00:09:48.246 Number of Power States: 1 00:09:48.246 Current Power State: Power State #0 00:09:48.246 Power State #0: 00:09:48.246 Max Power: 25.00 W 00:09:48.246 Non-Operational State: Operational 00:09:48.246 Entry Latency: 16 microseconds 00:09:48.246 Exit Latency: 4 microseconds 00:09:48.246 Relative Read Throughput: 0 00:09:48.246 Relative Read Latency: 0 00:09:48.246 Relative Write Throughput: 0 00:09:48.246 Relative Write Latency: 0 00:09:48.246 Idle Power: Not Reported 00:09:48.246 Active Power: Not Reported 00:09:48.246 Non-Operational Permissive Mode: Not Supported 00:09:48.246 00:09:48.246 Health Information 00:09:48.246 ================== 00:09:48.246 Critical Warnings: 00:09:48.246 Available Spare Space: OK 00:09:48.246 [2024-11-05 03:20:11.719202] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64239 terminated unexpectedly 00:09:48.247 Temperature: OK 00:09:48.247 Device Reliability: OK 00:09:48.247 Read Only: No 00:09:48.247 Volatile Memory Backup: OK 00:09:48.247 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.247 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.247 Available Spare: 0% 00:09:48.247 Available Spare Threshold: 0% 00:09:48.247 Life Percentage Used: 0% 00:09:48.247 Data Units Read: 1130 00:09:48.247 Data Units Written: 996 00:09:48.247 Host Read Commands: 48125 00:09:48.247 Host Write Commands: 46914 00:09:48.247 Controller Busy Time: 0 minutes 00:09:48.247 Power Cycles: 0 00:09:48.247 Power On Hours: 0 hours 00:09:48.247 Unsafe Shutdowns: 0 00:09:48.247 Unrecoverable Media Errors: 0 00:09:48.247 Lifetime Error Log Entries: 0 00:09:48.247 Warning Temperature Time: 0 minutes 00:09:48.247 Critical Temperature Time: 0 minutes 00:09:48.247 00:09:48.247 Number of Queues 00:09:48.247 ================ 00:09:48.247 Number of I/O Submission Queues: 64 00:09:48.247 Number of I/O Completion Queues: 64 00:09:48.247 00:09:48.247 ZNS Specific Controller Data 00:09:48.247 ============================ 00:09:48.247 Zone Append Size Limit: 0 00:09:48.247 00:09:48.247 00:09:48.247 Active Namespaces 00:09:48.247 ================= 00:09:48.247 Namespace ID:1 00:09:48.247 Error Recovery Timeout: Unlimited 00:09:48.247 Command Set Identifier: NVM (00h) 00:09:48.247 Deallocate: Supported 00:09:48.247 Deallocated/Unwritten Error: Supported 00:09:48.247 Deallocated Read Value: All 0x00 00:09:48.247 Deallocate in Write Zeroes: Not Supported 00:09:48.247 Deallocated Guard Field: 0xFFFF 00:09:48.247 Flush: Supported 00:09:48.247 Reservation: Not Supported 00:09:48.247 Namespace Sharing Capabilities: Private 00:09:48.247 Size (in LBAs): 1310720 (5GiB) 00:09:48.247 Capacity (in LBAs): 1310720 (5GiB) 00:09:48.247 Utilization (in LBAs): 1310720 (5GiB) 00:09:48.247 Thin Provisioning: Not Supported 00:09:48.247 Per-NS Atomic Units: No 00:09:48.247 Maximum Single Source Range Length: 128 00:09:48.247 Maximum Copy Length: 128 00:09:48.247 Maximum Source Range Count: 128 00:09:48.247 NGUID/EUI64 Never Reused: No 00:09:48.247 Namespace Write Protected: No 00:09:48.247 Number of LBA Formats: 8 00:09:48.247 Current LBA Format: 
LBA Format #04 00:09:48.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.247 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.247 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.247 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.247 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.247 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.247 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.247 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.247 00:09:48.247 NVM Specific Namespace Data 00:09:48.247 =========================== 00:09:48.247 Logical Block Storage Tag Mask: 0 00:09:48.247 Protection Information Capabilities: 00:09:48.247 16b Guard Protection Information Storage Tag Support: No 00:09:48.247 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.247 Storage Tag Check Read Support: No 00:09:48.247 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.247 ===================================================== 00:09:48.247 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.247 ===================================================== 00:09:48.247 Controller Capabilities/Features 00:09:48.247 ================================ 00:09:48.247 Vendor ID: 1b36 00:09:48.247 Subsystem Vendor ID: 1af4 00:09:48.247 Serial Number: 12343 00:09:48.247 Model Number: QEMU NVMe Ctrl 00:09:48.247 Firmware Version: 8.0.0 00:09:48.247 Recommended Arb Burst: 6 00:09:48.247 IEEE OUI Identifier: 00 54 52 00:09:48.247 Multi-path I/O 00:09:48.247 May have multiple subsystem ports: No 00:09:48.247 May have multiple controllers: Yes 00:09:48.247 Associated with SR-IOV VF: No 00:09:48.247 Max Data Transfer Size: 524288 00:09:48.247 Max Number of Namespaces: 256 00:09:48.247 Max Number of I/O Queues: 64 00:09:48.247 NVMe Specification Version (VS): 1.4 00:09:48.247 NVMe Specification Version (Identify): 1.4 00:09:48.247 Maximum Queue Entries: 2048 00:09:48.247 Contiguous Queues Required: Yes 00:09:48.247 Arbitration Mechanisms Supported 00:09:48.247 Weighted Round Robin: Not Supported 00:09:48.247 Vendor Specific: Not Supported 00:09:48.247 Reset Timeout: 7500 ms 00:09:48.247 Doorbell Stride: 4 bytes 00:09:48.247 NVM Subsystem Reset: Not Supported 00:09:48.247 Command Sets Supported 00:09:48.247 NVM Command Set: Supported 00:09:48.247 Boot Partition: Not Supported 00:09:48.247 Memory Page Size Minimum: 4096 bytes 00:09:48.247 Memory Page Size Maximum: 65536 bytes 00:09:48.247 Persistent Memory Region: Not Supported 00:09:48.247 Optional Asynchronous Events Supported 00:09:48.247 Namespace Attribute Notices: Supported 00:09:48.247 Firmware Activation Notices: Not Supported 00:09:48.247 ANA Change Notices: Not Supported 00:09:48.247 PLE Aggregate Log 
Change Notices: Not Supported 00:09:48.247 LBA Status Info Alert Notices: Not Supported 00:09:48.247 EGE Aggregate Log Change Notices: Not Supported 00:09:48.247 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.247 Zone Descriptor Change Notices: Not Supported 00:09:48.247 Discovery Log Change Notices: Not Supported 00:09:48.247 Controller Attributes 00:09:48.247 128-bit Host Identifier: Not Supported 00:09:48.247 Non-Operational Permissive Mode: Not Supported 00:09:48.247 NVM Sets: Not Supported 00:09:48.247 Read Recovery Levels: Not Supported 00:09:48.247 Endurance Groups: Supported 00:09:48.247 Predictable Latency Mode: Not Supported 00:09:48.247 Traffic Based Keep ALive: Not Supported 00:09:48.247 Namespace Granularity: Not Supported 00:09:48.247 SQ Associations: Not Supported 00:09:48.247 UUID List: Not Supported 00:09:48.247 Multi-Domain Subsystem: Not Supported 00:09:48.247 Fixed Capacity Management: Not Supported 00:09:48.247 Variable Capacity Management: Not Supported 00:09:48.247 Delete Endurance Group: Not Supported 00:09:48.247 Delete NVM Set: Not Supported 00:09:48.247 Extended LBA Formats Supported: Supported 00:09:48.247 Flexible Data Placement Supported: Supported 00:09:48.247 00:09:48.247 Controller Memory Buffer Support 00:09:48.247 ================================ 00:09:48.247 Supported: No 00:09:48.247 00:09:48.247 Persistent Memory Region Support 00:09:48.247 ================================ 00:09:48.247 Supported: No 00:09:48.247 00:09:48.247 Admin Command Set Attributes 00:09:48.247 ============================ 00:09:48.247 Security Send/Receive: Not Supported 00:09:48.247 Format NVM: Supported 00:09:48.247 Firmware Activate/Download: Not Supported 00:09:48.247 Namespace Management: Supported 00:09:48.247 Device Self-Test: Not Supported 00:09:48.247 Directives: Supported 00:09:48.247 NVMe-MI: Not Supported 00:09:48.247 Virtualization Management: Not Supported 00:09:48.247 Doorbell Buffer Config: Supported 00:09:48.247 Get LBA Status Capability: Not Supported 00:09:48.247 Command & Feature Lockdown Capability: Not Supported 00:09:48.247 Abort Command Limit: 4 00:09:48.247 Async Event Request Limit: 4 00:09:48.247 Number of Firmware Slots: N/A 00:09:48.247 Firmware Slot 1 Read-Only: N/A 00:09:48.247 Firmware Activation Without Reset: N/A 00:09:48.247 Multiple Update Detection Support: N/A 00:09:48.247 Firmware Update Granularity: No Information Provided 00:09:48.247 Per-Namespace SMART Log: Yes 00:09:48.247 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.247 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:48.247 Command Effects Log Page: Supported 00:09:48.247 Get Log Page Extended Data: Supported 00:09:48.247 Telemetry Log Pages: Not Supported 00:09:48.247 Persistent Event Log Pages: Not Supported 00:09:48.247 Supported Log Pages Log Page: May Support 00:09:48.247 Commands Supported & Effects Log Page: Not Supported 00:09:48.248 Feature Identifiers & Effects Log Page:May Support 00:09:48.248 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.248 Data Area 4 for Telemetry Log: Not Supported 00:09:48.248 Error Log Page Entries Supported: 1 00:09:48.248 Keep Alive: Not Supported 00:09:48.248 00:09:48.248 NVM Command Set Attributes 00:09:48.248 ========================== 00:09:48.248 Submission Queue Entry Size 00:09:48.248 Max: 64 00:09:48.248 Min: 64 00:09:48.248 Completion Queue Entry Size 00:09:48.248 Max: 16 00:09:48.248 Min: 16 00:09:48.248 Number of Namespaces: 256 00:09:48.248 Compare Command: Supported 00:09:48.248 Write 
Uncorrectable Command: Not Supported 00:09:48.248 Dataset Management Command: Supported 00:09:48.248 Write Zeroes Command: Supported 00:09:48.248 Set Features Save Field: Supported 00:09:48.248 Reservations: Not Supported 00:09:48.248 Timestamp: Supported 00:09:48.248 Copy: Supported 00:09:48.248 Volatile Write Cache: Present 00:09:48.248 Atomic Write Unit (Normal): 1 00:09:48.248 Atomic Write Unit (PFail): 1 00:09:48.248 Atomic Compare & Write Unit: 1 00:09:48.248 Fused Compare & Write: Not Supported 00:09:48.248 Scatter-Gather List 00:09:48.248 SGL Command Set: Supported 00:09:48.248 SGL Keyed: Not Supported 00:09:48.248 SGL Bit Bucket Descriptor: Not Supported 00:09:48.248 SGL Metadata Pointer: Not Supported 00:09:48.248 Oversized SGL: Not Supported 00:09:48.248 SGL Metadata Address: Not Supported 00:09:48.248 SGL Offset: Not Supported 00:09:48.248 Transport SGL Data Block: Not Supported 00:09:48.248 Replay Protected Memory Block: Not Supported 00:09:48.248 00:09:48.248 Firmware Slot Information 00:09:48.248 ========================= 00:09:48.248 Active slot: 1 00:09:48.248 Slot 1 Firmware Revision: 1.0 00:09:48.248 00:09:48.248 00:09:48.248 Commands Supported and Effects 00:09:48.248 ============================== 00:09:48.248 Admin Commands 00:09:48.248 -------------- 00:09:48.248 Delete I/O Submission Queue (00h): Supported 00:09:48.248 Create I/O Submission Queue (01h): Supported 00:09:48.248 Get Log Page (02h): Supported 00:09:48.248 Delete I/O Completion Queue (04h): Supported 00:09:48.248 Create I/O Completion Queue (05h): Supported 00:09:48.248 Identify (06h): Supported 00:09:48.248 Abort (08h): Supported 00:09:48.248 Set Features (09h): Supported 00:09:48.248 Get Features (0Ah): Supported 00:09:48.248 Asynchronous Event Request (0Ch): Supported 00:09:48.248 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.248 Directive Send (19h): Supported 00:09:48.248 Directive Receive (1Ah): Supported 00:09:48.248 Virtualization Management (1Ch): Supported 00:09:48.248 Doorbell Buffer Config (7Ch): Supported 00:09:48.248 Format NVM (80h): Supported LBA-Change 00:09:48.248 I/O Commands 00:09:48.248 ------------ 00:09:48.248 Flush (00h): Supported LBA-Change 00:09:48.248 Write (01h): Supported LBA-Change 00:09:48.248 Read (02h): Supported 00:09:48.248 Compare (05h): Supported 00:09:48.248 Write Zeroes (08h): Supported LBA-Change 00:09:48.248 Dataset Management (09h): Supported LBA-Change 00:09:48.248 Unknown (0Ch): Supported 00:09:48.248 Unknown (12h): Supported 00:09:48.248 Copy (19h): Supported LBA-Change 00:09:48.248 Unknown (1Dh): Supported LBA-Change 00:09:48.248 00:09:48.248 Error Log 00:09:48.248 ========= 00:09:48.248 00:09:48.248 Arbitration 00:09:48.248 =========== 00:09:48.248 Arbitration Burst: no limit 00:09:48.248 00:09:48.248 Power Management 00:09:48.248 ================ 00:09:48.248 Number of Power States: 1 00:09:48.248 Current Power State: Power State #0 00:09:48.248 Power State #0: 00:09:48.248 Max Power: 25.00 W 00:09:48.248 Non-Operational State: Operational 00:09:48.248 Entry Latency: 16 microseconds 00:09:48.248 Exit Latency: 4 microseconds 00:09:48.248 Relative Read Throughput: 0 00:09:48.248 Relative Read Latency: 0 00:09:48.248 Relative Write Throughput: 0 00:09:48.248 Relative Write Latency: 0 00:09:48.248 Idle Power: Not Reported 00:09:48.248 Active Power: Not Reported 00:09:48.248 Non-Operational Permissive Mode: Not Supported 00:09:48.248 00:09:48.248 Health Information 00:09:48.248 ================== 00:09:48.248 Critical Warnings: 00:09:48.248 
Available Spare Space: OK 00:09:48.248 Temperature: OK 00:09:48.248 Device Reliability: OK 00:09:48.248 Read Only: No 00:09:48.248 Volatile Memory Backup: OK 00:09:48.248 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.248 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.248 Available Spare: 0% 00:09:48.248 Available Spare Threshold: 0% 00:09:48.248 Life Percentage Used: 0% 00:09:48.248 Data Units Read: 943 00:09:48.248 Data Units Written: 872 00:09:48.248 Host Read Commands: 34862 00:09:48.248 Host Write Commands: 34285 00:09:48.248 Controller Busy Time: 0 minutes 00:09:48.248 Power Cycles: 0 00:09:48.248 Power On Hours: 0 hours 00:09:48.248 Unsafe Shutdowns: 0 00:09:48.248 Unrecoverable Media Errors: 0 00:09:48.248 Lifetime Error Log Entries: 0 00:09:48.248 Warning Temperature Time: 0 minutes 00:09:48.248 Critical Temperature Time: 0 minutes 00:09:48.248 00:09:48.248 Number of Queues 00:09:48.248 ================ 00:09:48.248 Number of I/O Submission Queues: 64 00:09:48.248 Number of I/O Completion Queues: 64 00:09:48.248 00:09:48.248 ZNS Specific Controller Data 00:09:48.248 ============================ 00:09:48.248 Zone Append Size Limit: 0 00:09:48.248 00:09:48.248 00:09:48.248 Active Namespaces 00:09:48.248 ================= 00:09:48.248 Namespace ID:1 00:09:48.248 Error Recovery Timeout: Unlimited 00:09:48.248 Command Set Identifier: NVM (00h) 00:09:48.248 Deallocate: Supported 00:09:48.248 Deallocated/Unwritten Error: Supported 00:09:48.248 Deallocated Read Value: All 0x00 00:09:48.248 Deallocate in Write Zeroes: Not Supported 00:09:48.248 Deallocated Guard Field: 0xFFFF 00:09:48.248 Flush: Supported 00:09:48.248 Reservation: Not Supported 00:09:48.248 Namespace Sharing Capabilities: Multiple Controllers 00:09:48.248 Size (in LBAs): 262144 (1GiB) 00:09:48.248 Capacity (in LBAs): 262144 (1GiB) 00:09:48.248 Utilization (in LBAs): 262144 (1GiB) 00:09:48.248 Thin Provisioning: Not Supported 00:09:48.248 Per-NS Atomic Units: No 00:09:48.248 Maximum Single Source Range Length: 128 00:09:48.248 Maximum Copy Length: 128 00:09:48.248 Maximum Source Range Count: 128 00:09:48.248 NGUID/EUI64 Never Reused: No 00:09:48.248 Namespace Write Protected: No 00:09:48.248 Endurance group ID: 1 00:09:48.248 Number of LBA Formats: 8 00:09:48.248 Current LBA Format: LBA Format #04 00:09:48.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.248 00:09:48.248 Get Feature FDP: 00:09:48.248 ================ 00:09:48.248 Enabled: Yes 00:09:48.248 FDP configuration index: 0 00:09:48.248 00:09:48.248 FDP configurations log page 00:09:48.248 =========================== 00:09:48.248 Number of FDP configurations: 1 00:09:48.248 Version: 0 00:09:48.248 Size: 112 00:09:48.248 FDP Configuration Descriptor: 0 00:09:48.248 Descriptor Size: 96 00:09:48.248 Reclaim Group Identifier format: 2 00:09:48.248 FDP Volatile Write Cache: Not Present 00:09:48.248 FDP Configuration: Valid 00:09:48.248 Vendor Specific Size: 0 00:09:48.248 Number of Reclaim Groups: 2 00:09:48.248 Number of Reclaim Unit Handles: 8 00:09:48.248 Max Placement Identifiers: 128 00:09:48.248 Number of 
Namespaces Supported: 256 00:09:48.248 Reclaim Unit Nominal Size: 6000000 bytes 00:09:48.248 Estimated Reclaim Unit Time Limit: Not Reported 00:09:48.248 RUH Desc #000: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #001: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #002: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #003: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #004: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #005: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #006: RUH Type: Initially Isolated 00:09:48.248 RUH Desc #007: RUH Type: Initially Isolated 00:09:48.248 00:09:48.248 FDP reclaim unit handle usage log page 00:09:48.248 ====================================== 00:09:48.248 Number of Reclaim Unit Handles: 8 00:09:48.249 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:48.249 RUH Usage Desc #001: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #002: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #003: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #004: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #005: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #006: RUH Attributes: Unused 00:09:48.249 RUH Usage Desc #007: RUH Attributes: Unused 00:09:48.249 00:09:48.249 FDP statistics log page 00:09:48.249 ======================= 00:09:48.249 Host bytes with metadata written: 554606592 00:09:48.249 [2024-11-05 03:20:11.721642] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64239 terminated unexpectedly 00:09:48.249 Media bytes with metadata written: 554684416 00:09:48.249 Media bytes erased: 0 00:09:48.249 00:09:48.249 FDP events log page 00:09:48.249 =================== 00:09:48.249 Number of FDP events: 0 00:09:48.249 00:09:48.249 NVM Specific Namespace Data 00:09:48.249 =========================== 00:09:48.249 Logical Block Storage Tag Mask: 0 00:09:48.249 Protection Information Capabilities: 00:09:48.249 16b Guard Protection Information Storage Tag Support: No 00:09:48.249 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.249 Storage Tag Check Read Support: No 00:09:48.249 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.249 ===================================================== 00:09:48.249 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:48.249 ===================================================== 00:09:48.249 Controller Capabilities/Features 00:09:48.249 ================================ 00:09:48.249 Vendor ID: 1b36 00:09:48.249 Subsystem Vendor ID: 1af4 00:09:48.249 Serial Number: 12342 00:09:48.249 Model Number: QEMU NVMe Ctrl 00:09:48.249 Firmware Version: 8.0.0 00:09:48.249 Recommended Arb Burst: 6 00:09:48.249 IEEE OUI Identifier: 00 54 52 00:09:48.249 Multi-path I/O 
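An aside on the FDP statistics block reported for controller 12343 above: dividing media bytes written by host bytes written gives the write amplification factor (WAF) for the endurance group. The arithmetic in shell, with the two values hard-coded from the dump:

  host_bytes=554606592     # Host bytes with metadata written
  media_bytes=554684416    # Media bytes with metadata written
  # WAF = media writes / host writes; awk does the floating-point division.
  awk -v h="$host_bytes" -v m="$media_bytes" 'BEGIN { printf "WAF: %.4f\n", m / h }'

which prints WAF: 1.0001, i.e. effectively no write amplification on this freshly provisioned QEMU namespace.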
00:09:48.249 May have multiple subsystem ports: No 00:09:48.249 May have multiple controllers: No 00:09:48.249 Associated with SR-IOV VF: No 00:09:48.249 Max Data Transfer Size: 524288 00:09:48.249 Max Number of Namespaces: 256 00:09:48.249 Max Number of I/O Queues: 64 00:09:48.249 NVMe Specification Version (VS): 1.4 00:09:48.249 NVMe Specification Version (Identify): 1.4 00:09:48.249 Maximum Queue Entries: 2048 00:09:48.249 Contiguous Queues Required: Yes 00:09:48.249 Arbitration Mechanisms Supported 00:09:48.249 Weighted Round Robin: Not Supported 00:09:48.249 Vendor Specific: Not Supported 00:09:48.249 Reset Timeout: 7500 ms 00:09:48.249 Doorbell Stride: 4 bytes 00:09:48.249 NVM Subsystem Reset: Not Supported 00:09:48.249 Command Sets Supported 00:09:48.249 NVM Command Set: Supported 00:09:48.249 Boot Partition: Not Supported 00:09:48.249 Memory Page Size Minimum: 4096 bytes 00:09:48.249 Memory Page Size Maximum: 65536 bytes 00:09:48.249 Persistent Memory Region: Not Supported 00:09:48.249 Optional Asynchronous Events Supported 00:09:48.249 Namespace Attribute Notices: Supported 00:09:48.249 Firmware Activation Notices: Not Supported 00:09:48.249 ANA Change Notices: Not Supported 00:09:48.249 PLE Aggregate Log Change Notices: Not Supported 00:09:48.249 LBA Status Info Alert Notices: Not Supported 00:09:48.249 EGE Aggregate Log Change Notices: Not Supported 00:09:48.249 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.249 Zone Descriptor Change Notices: Not Supported 00:09:48.249 Discovery Log Change Notices: Not Supported 00:09:48.249 Controller Attributes 00:09:48.249 128-bit Host Identifier: Not Supported 00:09:48.249 Non-Operational Permissive Mode: Not Supported 00:09:48.249 NVM Sets: Not Supported 00:09:48.249 Read Recovery Levels: Not Supported 00:09:48.249 Endurance Groups: Not Supported 00:09:48.249 Predictable Latency Mode: Not Supported 00:09:48.249 Traffic Based Keep ALive: Not Supported 00:09:48.249 Namespace Granularity: Not Supported 00:09:48.249 SQ Associations: Not Supported 00:09:48.249 UUID List: Not Supported 00:09:48.249 Multi-Domain Subsystem: Not Supported 00:09:48.249 Fixed Capacity Management: Not Supported 00:09:48.249 Variable Capacity Management: Not Supported 00:09:48.249 Delete Endurance Group: Not Supported 00:09:48.249 Delete NVM Set: Not Supported 00:09:48.249 Extended LBA Formats Supported: Supported 00:09:48.249 Flexible Data Placement Supported: Not Supported 00:09:48.249 00:09:48.249 Controller Memory Buffer Support 00:09:48.249 ================================ 00:09:48.249 Supported: No 00:09:48.249 00:09:48.249 Persistent Memory Region Support 00:09:48.249 ================================ 00:09:48.249 Supported: No 00:09:48.249 00:09:48.249 Admin Command Set Attributes 00:09:48.249 ============================ 00:09:48.249 Security Send/Receive: Not Supported 00:09:48.249 Format NVM: Supported 00:09:48.249 Firmware Activate/Download: Not Supported 00:09:48.249 Namespace Management: Supported 00:09:48.249 Device Self-Test: Not Supported 00:09:48.249 Directives: Supported 00:09:48.249 NVMe-MI: Not Supported 00:09:48.249 Virtualization Management: Not Supported 00:09:48.249 Doorbell Buffer Config: Supported 00:09:48.249 Get LBA Status Capability: Not Supported 00:09:48.249 Command & Feature Lockdown Capability: Not Supported 00:09:48.249 Abort Command Limit: 4 00:09:48.249 Async Event Request Limit: 4 00:09:48.249 Number of Firmware Slots: N/A 00:09:48.249 Firmware Slot 1 Read-Only: N/A 00:09:48.249 Firmware Activation Without Reset: N/A 
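For reference, the "Waiting for stub to be ready for secondary processes..." phase near the top of this test corresponds to a poll loop in autotest_common.sh: start test/app/stub in the background as the DPDK primary process, then check once a second whether it has created /var/run/spdk_stub0, giving up if the process dies first. A condensed sketch with the flags and paths taken from the xtrace above (the bounded retry count is an assumption; this excerpt does not show the real loop's exit condition):

  stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
  "$stub" -s 4096 -i 0 -m 0xE &   # 4096 MB of memory, shm id 0, core mask 0xE (cores 1-3)
  stubpid=$!
  for _ in $(seq 1 30); do                    # assumed cap, not from the log
      [[ -e /var/run/spdk_stub0 ]] && break   # stub is up and ready for secondary processes
      [[ -e /proc/$stubpid ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
      sleep 1s
  done

The -e /var/run/spdk_stub0 and -e /proc/$stubpid tests are exactly the ones visible in the trace (common/autotest_common.sh@1075 and @1077).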
00:09:48.249 Multiple Update Detection Support: N/A 00:09:48.249 Firmware Update Granularity: No Information Provided 00:09:48.249 Per-Namespace SMART Log: Yes 00:09:48.249 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.249 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:48.249 Command Effects Log Page: Supported 00:09:48.249 Get Log Page Extended Data: Supported 00:09:48.249 Telemetry Log Pages: Not Supported 00:09:48.249 Persistent Event Log Pages: Not Supported 00:09:48.249 Supported Log Pages Log Page: May Support 00:09:48.249 Commands Supported & Effects Log Page: Not Supported 00:09:48.249 Feature Identifiers & Effects Log Page:May Support 00:09:48.249 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.249 Data Area 4 for Telemetry Log: Not Supported 00:09:48.249 Error Log Page Entries Supported: 1 00:09:48.249 Keep Alive: Not Supported 00:09:48.249 00:09:48.249 NVM Command Set Attributes 00:09:48.249 ========================== 00:09:48.249 Submission Queue Entry Size 00:09:48.249 Max: 64 00:09:48.249 Min: 64 00:09:48.249 Completion Queue Entry Size 00:09:48.249 Max: 16 00:09:48.249 Min: 16 00:09:48.249 Number of Namespaces: 256 00:09:48.249 Compare Command: Supported 00:09:48.249 Write Uncorrectable Command: Not Supported 00:09:48.249 Dataset Management Command: Supported 00:09:48.249 Write Zeroes Command: Supported 00:09:48.249 Set Features Save Field: Supported 00:09:48.249 Reservations: Not Supported 00:09:48.249 Timestamp: Supported 00:09:48.249 Copy: Supported 00:09:48.249 Volatile Write Cache: Present 00:09:48.250 Atomic Write Unit (Normal): 1 00:09:48.250 Atomic Write Unit (PFail): 1 00:09:48.250 Atomic Compare & Write Unit: 1 00:09:48.250 Fused Compare & Write: Not Supported 00:09:48.250 Scatter-Gather List 00:09:48.250 SGL Command Set: Supported 00:09:48.250 SGL Keyed: Not Supported 00:09:48.250 SGL Bit Bucket Descriptor: Not Supported 00:09:48.250 SGL Metadata Pointer: Not Supported 00:09:48.250 Oversized SGL: Not Supported 00:09:48.250 SGL Metadata Address: Not Supported 00:09:48.250 SGL Offset: Not Supported 00:09:48.250 Transport SGL Data Block: Not Supported 00:09:48.250 Replay Protected Memory Block: Not Supported 00:09:48.250 00:09:48.250 Firmware Slot Information 00:09:48.250 ========================= 00:09:48.250 Active slot: 1 00:09:48.250 Slot 1 Firmware Revision: 1.0 00:09:48.250 00:09:48.250 00:09:48.250 Commands Supported and Effects 00:09:48.250 ============================== 00:09:48.250 Admin Commands 00:09:48.250 -------------- 00:09:48.250 Delete I/O Submission Queue (00h): Supported 00:09:48.250 Create I/O Submission Queue (01h): Supported 00:09:48.250 Get Log Page (02h): Supported 00:09:48.250 Delete I/O Completion Queue (04h): Supported 00:09:48.250 Create I/O Completion Queue (05h): Supported 00:09:48.250 Identify (06h): Supported 00:09:48.250 Abort (08h): Supported 00:09:48.250 Set Features (09h): Supported 00:09:48.250 Get Features (0Ah): Supported 00:09:48.250 Asynchronous Event Request (0Ch): Supported 00:09:48.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.250 Directive Send (19h): Supported 00:09:48.250 Directive Receive (1Ah): Supported 00:09:48.250 Virtualization Management (1Ch): Supported 00:09:48.250 Doorbell Buffer Config (7Ch): Supported 00:09:48.250 Format NVM (80h): Supported LBA-Change 00:09:48.250 I/O Commands 00:09:48.250 ------------ 00:09:48.250 Flush (00h): Supported LBA-Change 00:09:48.250 Write (01h): Supported LBA-Change 00:09:48.250 Read (02h): Supported 00:09:48.250 Compare (05h): 
Supported 00:09:48.250 Write Zeroes (08h): Supported LBA-Change 00:09:48.250 Dataset Management (09h): Supported LBA-Change 00:09:48.250 Unknown (0Ch): Supported 00:09:48.250 Unknown (12h): Supported 00:09:48.250 Copy (19h): Supported LBA-Change 00:09:48.250 Unknown (1Dh): Supported LBA-Change 00:09:48.250 00:09:48.250 Error Log 00:09:48.250 ========= 00:09:48.250 00:09:48.250 Arbitration 00:09:48.250 =========== 00:09:48.250 Arbitration Burst: no limit 00:09:48.250 00:09:48.250 Power Management 00:09:48.250 ================ 00:09:48.250 Number of Power States: 1 00:09:48.250 Current Power State: Power State #0 00:09:48.250 Power State #0: 00:09:48.250 Max Power: 25.00 W 00:09:48.250 Non-Operational State: Operational 00:09:48.250 Entry Latency: 16 microseconds 00:09:48.250 Exit Latency: 4 microseconds 00:09:48.250 Relative Read Throughput: 0 00:09:48.250 Relative Read Latency: 0 00:09:48.250 Relative Write Throughput: 0 00:09:48.250 Relative Write Latency: 0 00:09:48.250 Idle Power: Not Reported 00:09:48.250 Active Power: Not Reported 00:09:48.250 Non-Operational Permissive Mode: Not Supported 00:09:48.250 00:09:48.250 Health Information 00:09:48.250 ================== 00:09:48.250 Critical Warnings: 00:09:48.250 Available Spare Space: OK 00:09:48.250 Temperature: OK 00:09:48.250 Device Reliability: OK 00:09:48.250 Read Only: No 00:09:48.250 Volatile Memory Backup: OK 00:09:48.250 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.250 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.250 Available Spare: 0% 00:09:48.250 Available Spare Threshold: 0% 00:09:48.250 Life Percentage Used: 0% 00:09:48.250 Data Units Read: 2428 00:09:48.250 Data Units Written: 2215 00:09:48.250 Host Read Commands: 101468 00:09:48.250 Host Write Commands: 99737 00:09:48.250 Controller Busy Time: 0 minutes 00:09:48.250 Power Cycles: 0 00:09:48.250 Power On Hours: 0 hours 00:09:48.250 Unsafe Shutdowns: 0 00:09:48.250 Unrecoverable Media Errors: 0 00:09:48.250 Lifetime Error Log Entries: 0 00:09:48.250 Warning Temperature Time: 0 minutes 00:09:48.250 Critical Temperature Time: 0 minutes 00:09:48.250 00:09:48.250 Number of Queues 00:09:48.250 ================ 00:09:48.250 Number of I/O Submission Queues: 64 00:09:48.250 Number of I/O Completion Queues: 64 00:09:48.250 00:09:48.250 ZNS Specific Controller Data 00:09:48.250 ============================ 00:09:48.250 Zone Append Size Limit: 0 00:09:48.250 00:09:48.250 00:09:48.250 Active Namespaces 00:09:48.250 ================= 00:09:48.250 Namespace ID:1 00:09:48.250 Error Recovery Timeout: Unlimited 00:09:48.250 Command Set Identifier: NVM (00h) 00:09:48.250 Deallocate: Supported 00:09:48.250 Deallocated/Unwritten Error: Supported 00:09:48.250 Deallocated Read Value: All 0x00 00:09:48.250 Deallocate in Write Zeroes: Not Supported 00:09:48.250 Deallocated Guard Field: 0xFFFF 00:09:48.250 Flush: Supported 00:09:48.250 Reservation: Not Supported 00:09:48.250 Namespace Sharing Capabilities: Private 00:09:48.250 Size (in LBAs): 1048576 (4GiB) 00:09:48.250 Capacity (in LBAs): 1048576 (4GiB) 00:09:48.250 Utilization (in LBAs): 1048576 (4GiB) 00:09:48.250 Thin Provisioning: Not Supported 00:09:48.250 Per-NS Atomic Units: No 00:09:48.250 Maximum Single Source Range Length: 128 00:09:48.250 Maximum Copy Length: 128 00:09:48.250 Maximum Source Range Count: 128 00:09:48.250 NGUID/EUI64 Never Reused: No 00:09:48.250 Namespace Write Protected: No 00:09:48.250 Number of LBA Formats: 8 00:09:48.250 Current LBA Format: LBA Format #04 00:09:48.250 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:48.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.250 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.250 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.250 00:09:48.250 NVM Specific Namespace Data 00:09:48.250 =========================== 00:09:48.250 Logical Block Storage Tag Mask: 0 00:09:48.250 Protection Information Capabilities: 00:09:48.250 16b Guard Protection Information Storage Tag Support: No 00:09:48.250 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.250 Storage Tag Check Read Support: No 00:09:48.250 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.250 Namespace ID:2 00:09:48.250 Error Recovery Timeout: Unlimited 00:09:48.250 Command Set Identifier: NVM (00h) 00:09:48.250 Deallocate: Supported 00:09:48.250 Deallocated/Unwritten Error: Supported 00:09:48.250 Deallocated Read Value: All 0x00 00:09:48.250 Deallocate in Write Zeroes: Not Supported 00:09:48.250 Deallocated Guard Field: 0xFFFF 00:09:48.250 Flush: Supported 00:09:48.250 Reservation: Not Supported 00:09:48.250 Namespace Sharing Capabilities: Private 00:09:48.250 Size (in LBAs): 1048576 (4GiB) 00:09:48.250 Capacity (in LBAs): 1048576 (4GiB) 00:09:48.250 Utilization (in LBAs): 1048576 (4GiB) 00:09:48.250 Thin Provisioning: Not Supported 00:09:48.250 Per-NS Atomic Units: No 00:09:48.250 Maximum Single Source Range Length: 128 00:09:48.250 Maximum Copy Length: 128 00:09:48.250 Maximum Source Range Count: 128 00:09:48.250 NGUID/EUI64 Never Reused: No 00:09:48.250 Namespace Write Protected: No 00:09:48.250 Number of LBA Formats: 8 00:09:48.250 Current LBA Format: LBA Format #04 00:09:48.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.250 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.250 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.250 00:09:48.250 NVM Specific Namespace Data 00:09:48.250 =========================== 00:09:48.250 Logical Block Storage Tag Mask: 0 00:09:48.250 Protection Information Capabilities: 00:09:48.250 16b Guard Protection Information Storage Tag Support: No 00:09:48.250 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:48.250 Storage Tag Check Read Support: No 00:09:48.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Namespace ID:3 00:09:48.251 Error Recovery Timeout: Unlimited 00:09:48.251 Command Set Identifier: NVM (00h) 00:09:48.251 Deallocate: Supported 00:09:48.251 Deallocated/Unwritten Error: Supported 00:09:48.251 Deallocated Read Value: All 0x00 00:09:48.251 Deallocate in Write Zeroes: Not Supported 00:09:48.251 Deallocated Guard Field: 0xFFFF 00:09:48.251 Flush: Supported 00:09:48.251 Reservation: Not Supported 00:09:48.251 Namespace Sharing Capabilities: Private 00:09:48.251 Size (in LBAs): 1048576 (4GiB) 00:09:48.251 Capacity (in LBAs): 1048576 (4GiB) 00:09:48.251 Utilization (in LBAs): 1048576 (4GiB) 00:09:48.251 Thin Provisioning: Not Supported 00:09:48.251 Per-NS Atomic Units: No 00:09:48.251 Maximum Single Source Range Length: 128 00:09:48.251 Maximum Copy Length: 128 00:09:48.251 Maximum Source Range Count: 128 00:09:48.251 NGUID/EUI64 Never Reused: No 00:09:48.251 Namespace Write Protected: No 00:09:48.251 Number of LBA Formats: 8 00:09:48.251 Current LBA Format: LBA Format #04 00:09:48.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.251 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.251 00:09:48.251 NVM Specific Namespace Data 00:09:48.251 =========================== 00:09:48.251 Logical Block Storage Tag Mask: 0 00:09:48.251 Protection Information Capabilities: 00:09:48.251 16b Guard Protection Information Storage Tag Support: No 00:09:48.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.251 Storage Tag Check Read Support: No 00:09:48.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.251 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:48.251 03:20:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:48.510 ===================================================== 00:09:48.510 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.510 ===================================================== 00:09:48.510 Controller Capabilities/Features 00:09:48.510 ================================ 00:09:48.510 Vendor ID: 1b36 00:09:48.510 Subsystem Vendor ID: 1af4 00:09:48.510 Serial Number: 12340 00:09:48.510 Model Number: QEMU NVMe Ctrl 00:09:48.510 Firmware Version: 8.0.0 00:09:48.510 Recommended Arb Burst: 6 00:09:48.510 IEEE OUI Identifier: 00 54 52 00:09:48.510 Multi-path I/O 00:09:48.510 May have multiple subsystem ports: No 00:09:48.510 May have multiple controllers: No 00:09:48.510 Associated with SR-IOV VF: No 00:09:48.510 Max Data Transfer Size: 524288 00:09:48.510 Max Number of Namespaces: 256 00:09:48.510 Max Number of I/O Queues: 64 00:09:48.510 NVMe Specification Version (VS): 1.4 00:09:48.510 NVMe Specification Version (Identify): 1.4 00:09:48.510 Maximum Queue Entries: 2048 00:09:48.510 Contiguous Queues Required: Yes 00:09:48.510 Arbitration Mechanisms Supported 00:09:48.510 Weighted Round Robin: Not Supported 00:09:48.510 Vendor Specific: Not Supported 00:09:48.510 Reset Timeout: 7500 ms 00:09:48.510 Doorbell Stride: 4 bytes 00:09:48.510 NVM Subsystem Reset: Not Supported 00:09:48.510 Command Sets Supported 00:09:48.510 NVM Command Set: Supported 00:09:48.510 Boot Partition: Not Supported 00:09:48.510 Memory Page Size Minimum: 4096 bytes 00:09:48.510 Memory Page Size Maximum: 65536 bytes 00:09:48.510 Persistent Memory Region: Not Supported 00:09:48.510 Optional Asynchronous Events Supported 00:09:48.510 Namespace Attribute Notices: Supported 00:09:48.510 Firmware Activation Notices: Not Supported 00:09:48.510 ANA Change Notices: Not Supported 00:09:48.510 PLE Aggregate Log Change Notices: Not Supported 00:09:48.510 LBA Status Info Alert Notices: Not Supported 00:09:48.510 EGE Aggregate Log Change Notices: Not Supported 00:09:48.510 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.510 Zone Descriptor Change Notices: Not Supported 00:09:48.510 Discovery Log Change Notices: Not Supported 00:09:48.510 Controller Attributes 00:09:48.510 128-bit Host Identifier: Not Supported 00:09:48.510 Non-Operational Permissive Mode: Not Supported 00:09:48.510 NVM Sets: Not Supported 00:09:48.510 Read Recovery Levels: Not Supported 00:09:48.510 Endurance Groups: Not Supported 00:09:48.510 Predictable Latency Mode: Not Supported 00:09:48.510 Traffic Based Keep ALive: Not Supported 00:09:48.510 Namespace Granularity: Not Supported 00:09:48.510 SQ Associations: Not Supported 00:09:48.511 UUID List: Not Supported 00:09:48.511 Multi-Domain Subsystem: Not Supported 00:09:48.511 Fixed Capacity Management: Not Supported 00:09:48.511 Variable Capacity Management: Not Supported 00:09:48.511 Delete Endurance Group: Not Supported 00:09:48.511 Delete NVM Set: Not Supported 00:09:48.511 Extended LBA Formats Supported: Supported 00:09:48.511 Flexible Data Placement Supported: Not Supported 00:09:48.511 00:09:48.511 Controller Memory Buffer Support 00:09:48.511 ================================ 00:09:48.511 Supported: No 00:09:48.511 00:09:48.511 Persistent Memory Region Support 00:09:48.511 
================================ 00:09:48.511 Supported: No 00:09:48.511 00:09:48.511 Admin Command Set Attributes 00:09:48.511 ============================ 00:09:48.511 Security Send/Receive: Not Supported 00:09:48.511 Format NVM: Supported 00:09:48.511 Firmware Activate/Download: Not Supported 00:09:48.511 Namespace Management: Supported 00:09:48.511 Device Self-Test: Not Supported 00:09:48.511 Directives: Supported 00:09:48.511 NVMe-MI: Not Supported 00:09:48.511 Virtualization Management: Not Supported 00:09:48.511 Doorbell Buffer Config: Supported 00:09:48.511 Get LBA Status Capability: Not Supported 00:09:48.511 Command & Feature Lockdown Capability: Not Supported 00:09:48.511 Abort Command Limit: 4 00:09:48.511 Async Event Request Limit: 4 00:09:48.511 Number of Firmware Slots: N/A 00:09:48.511 Firmware Slot 1 Read-Only: N/A 00:09:48.511 Firmware Activation Without Reset: N/A 00:09:48.511 Multiple Update Detection Support: N/A 00:09:48.511 Firmware Update Granularity: No Information Provided 00:09:48.511 Per-Namespace SMART Log: Yes 00:09:48.511 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.511 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:48.511 Command Effects Log Page: Supported 00:09:48.511 Get Log Page Extended Data: Supported 00:09:48.511 Telemetry Log Pages: Not Supported 00:09:48.511 Persistent Event Log Pages: Not Supported 00:09:48.511 Supported Log Pages Log Page: May Support 00:09:48.511 Commands Supported & Effects Log Page: Not Supported 00:09:48.511 Feature Identifiers & Effects Log Page:May Support 00:09:48.511 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.511 Data Area 4 for Telemetry Log: Not Supported 00:09:48.511 Error Log Page Entries Supported: 1 00:09:48.511 Keep Alive: Not Supported 00:09:48.511 00:09:48.511 NVM Command Set Attributes 00:09:48.511 ========================== 00:09:48.511 Submission Queue Entry Size 00:09:48.511 Max: 64 00:09:48.511 Min: 64 00:09:48.511 Completion Queue Entry Size 00:09:48.511 Max: 16 00:09:48.511 Min: 16 00:09:48.511 Number of Namespaces: 256 00:09:48.511 Compare Command: Supported 00:09:48.511 Write Uncorrectable Command: Not Supported 00:09:48.511 Dataset Management Command: Supported 00:09:48.511 Write Zeroes Command: Supported 00:09:48.511 Set Features Save Field: Supported 00:09:48.511 Reservations: Not Supported 00:09:48.511 Timestamp: Supported 00:09:48.511 Copy: Supported 00:09:48.511 Volatile Write Cache: Present 00:09:48.511 Atomic Write Unit (Normal): 1 00:09:48.511 Atomic Write Unit (PFail): 1 00:09:48.511 Atomic Compare & Write Unit: 1 00:09:48.511 Fused Compare & Write: Not Supported 00:09:48.511 Scatter-Gather List 00:09:48.511 SGL Command Set: Supported 00:09:48.511 SGL Keyed: Not Supported 00:09:48.511 SGL Bit Bucket Descriptor: Not Supported 00:09:48.511 SGL Metadata Pointer: Not Supported 00:09:48.511 Oversized SGL: Not Supported 00:09:48.511 SGL Metadata Address: Not Supported 00:09:48.511 SGL Offset: Not Supported 00:09:48.511 Transport SGL Data Block: Not Supported 00:09:48.511 Replay Protected Memory Block: Not Supported 00:09:48.511 00:09:48.511 Firmware Slot Information 00:09:48.511 ========================= 00:09:48.511 Active slot: 1 00:09:48.511 Slot 1 Firmware Revision: 1.0 00:09:48.511 00:09:48.511 00:09:48.511 Commands Supported and Effects 00:09:48.511 ============================== 00:09:48.511 Admin Commands 00:09:48.511 -------------- 00:09:48.511 Delete I/O Submission Queue (00h): Supported 00:09:48.511 Create I/O Submission Queue (01h): Supported 00:09:48.511 
Get Log Page (02h): Supported 00:09:48.511 Delete I/O Completion Queue (04h): Supported 00:09:48.511 Create I/O Completion Queue (05h): Supported 00:09:48.511 Identify (06h): Supported 00:09:48.511 Abort (08h): Supported 00:09:48.511 Set Features (09h): Supported 00:09:48.511 Get Features (0Ah): Supported 00:09:48.511 Asynchronous Event Request (0Ch): Supported 00:09:48.511 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.511 Directive Send (19h): Supported 00:09:48.511 Directive Receive (1Ah): Supported 00:09:48.511 Virtualization Management (1Ch): Supported 00:09:48.511 Doorbell Buffer Config (7Ch): Supported 00:09:48.511 Format NVM (80h): Supported LBA-Change 00:09:48.511 I/O Commands 00:09:48.511 ------------ 00:09:48.511 Flush (00h): Supported LBA-Change 00:09:48.511 Write (01h): Supported LBA-Change 00:09:48.511 Read (02h): Supported 00:09:48.511 Compare (05h): Supported 00:09:48.511 Write Zeroes (08h): Supported LBA-Change 00:09:48.511 Dataset Management (09h): Supported LBA-Change 00:09:48.511 Unknown (0Ch): Supported 00:09:48.511 Unknown (12h): Supported 00:09:48.511 Copy (19h): Supported LBA-Change 00:09:48.511 Unknown (1Dh): Supported LBA-Change 00:09:48.511 00:09:48.511 Error Log 00:09:48.511 ========= 00:09:48.511 00:09:48.511 Arbitration 00:09:48.511 =========== 00:09:48.511 Arbitration Burst: no limit 00:09:48.511 00:09:48.511 Power Management 00:09:48.511 ================ 00:09:48.511 Number of Power States: 1 00:09:48.511 Current Power State: Power State #0 00:09:48.511 Power State #0: 00:09:48.511 Max Power: 25.00 W 00:09:48.511 Non-Operational State: Operational 00:09:48.511 Entry Latency: 16 microseconds 00:09:48.511 Exit Latency: 4 microseconds 00:09:48.511 Relative Read Throughput: 0 00:09:48.511 Relative Read Latency: 0 00:09:48.511 Relative Write Throughput: 0 00:09:48.511 Relative Write Latency: 0 00:09:48.770 Idle Power: Not Reported 00:09:48.770 Active Power: Not Reported 00:09:48.770 Non-Operational Permissive Mode: Not Supported 00:09:48.770 00:09:48.770 Health Information 00:09:48.770 ================== 00:09:48.770 Critical Warnings: 00:09:48.770 Available Spare Space: OK 00:09:48.770 Temperature: OK 00:09:48.770 Device Reliability: OK 00:09:48.770 Read Only: No 00:09:48.770 Volatile Memory Backup: OK 00:09:48.770 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.770 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.770 Available Spare: 0% 00:09:48.770 Available Spare Threshold: 0% 00:09:48.770 Life Percentage Used: 0% 00:09:48.770 Data Units Read: 738 00:09:48.771 Data Units Written: 666 00:09:48.771 Host Read Commands: 32965 00:09:48.771 Host Write Commands: 32751 00:09:48.771 Controller Busy Time: 0 minutes 00:09:48.771 Power Cycles: 0 00:09:48.771 Power On Hours: 0 hours 00:09:48.771 Unsafe Shutdowns: 0 00:09:48.771 Unrecoverable Media Errors: 0 00:09:48.771 Lifetime Error Log Entries: 0 00:09:48.771 Warning Temperature Time: 0 minutes 00:09:48.771 Critical Temperature Time: 0 minutes 00:09:48.771 00:09:48.771 Number of Queues 00:09:48.771 ================ 00:09:48.771 Number of I/O Submission Queues: 64 00:09:48.771 Number of I/O Completion Queues: 64 00:09:48.771 00:09:48.771 ZNS Specific Controller Data 00:09:48.771 ============================ 00:09:48.771 Zone Append Size Limit: 0 00:09:48.771 00:09:48.771 00:09:48.771 Active Namespaces 00:09:48.771 ================= 00:09:48.771 Namespace ID:1 00:09:48.771 Error Recovery Timeout: Unlimited 00:09:48.771 Command Set Identifier: NVM (00h) 00:09:48.771 Deallocate: Supported 
00:09:48.771 Deallocated/Unwritten Error: Supported 00:09:48.771 Deallocated Read Value: All 0x00 00:09:48.771 Deallocate in Write Zeroes: Not Supported 00:09:48.771 Deallocated Guard Field: 0xFFFF 00:09:48.771 Flush: Supported 00:09:48.771 Reservation: Not Supported 00:09:48.771 Metadata Transferred as: Separate Metadata Buffer 00:09:48.771 Namespace Sharing Capabilities: Private 00:09:48.771 Size (in LBAs): 1548666 (5GiB) 00:09:48.771 Capacity (in LBAs): 1548666 (5GiB) 00:09:48.771 Utilization (in LBAs): 1548666 (5GiB) 00:09:48.771 Thin Provisioning: Not Supported 00:09:48.771 Per-NS Atomic Units: No 00:09:48.771 Maximum Single Source Range Length: 128 00:09:48.771 Maximum Copy Length: 128 00:09:48.771 Maximum Source Range Count: 128 00:09:48.771 NGUID/EUI64 Never Reused: No 00:09:48.771 Namespace Write Protected: No 00:09:48.771 Number of LBA Formats: 8 00:09:48.771 Current LBA Format: LBA Format #07 00:09:48.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.771 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.771 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.771 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.771 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.771 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.771 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.771 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.771 00:09:48.771 NVM Specific Namespace Data 00:09:48.771 =========================== 00:09:48.771 Logical Block Storage Tag Mask: 0 00:09:48.771 Protection Information Capabilities: 00:09:48.771 16b Guard Protection Information Storage Tag Support: No 00:09:48.771 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:48.771 Storage Tag Check Read Support: No 00:09:48.771 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:48.771 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:48.771 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:49.031 ===================================================== 00:09:49.031 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.031 ===================================================== 00:09:49.031 Controller Capabilities/Features 00:09:49.031 ================================ 00:09:49.031 Vendor ID: 1b36 00:09:49.031 Subsystem Vendor ID: 1af4 00:09:49.031 Serial Number: 12341 00:09:49.031 Model Number: QEMU NVMe Ctrl 00:09:49.031 Firmware Version: 8.0.0 00:09:49.031 Recommended Arb Burst: 6 00:09:49.031 IEEE OUI Identifier: 00 54 52 00:09:49.031 Multi-path I/O 00:09:49.031 May have multiple subsystem ports: No 00:09:49.031 May have multiple 
controllers: No 00:09:49.031 Associated with SR-IOV VF: No 00:09:49.031 Max Data Transfer Size: 524288 00:09:49.031 Max Number of Namespaces: 256 00:09:49.031 Max Number of I/O Queues: 64 00:09:49.031 NVMe Specification Version (VS): 1.4 00:09:49.031 NVMe Specification Version (Identify): 1.4 00:09:49.031 Maximum Queue Entries: 2048 00:09:49.031 Contiguous Queues Required: Yes 00:09:49.031 Arbitration Mechanisms Supported 00:09:49.031 Weighted Round Robin: Not Supported 00:09:49.031 Vendor Specific: Not Supported 00:09:49.031 Reset Timeout: 7500 ms 00:09:49.031 Doorbell Stride: 4 bytes 00:09:49.031 NVM Subsystem Reset: Not Supported 00:09:49.031 Command Sets Supported 00:09:49.031 NVM Command Set: Supported 00:09:49.031 Boot Partition: Not Supported 00:09:49.031 Memory Page Size Minimum: 4096 bytes 00:09:49.031 Memory Page Size Maximum: 65536 bytes 00:09:49.031 Persistent Memory Region: Not Supported 00:09:49.031 Optional Asynchronous Events Supported 00:09:49.031 Namespace Attribute Notices: Supported 00:09:49.031 Firmware Activation Notices: Not Supported 00:09:49.031 ANA Change Notices: Not Supported 00:09:49.031 PLE Aggregate Log Change Notices: Not Supported 00:09:49.031 LBA Status Info Alert Notices: Not Supported 00:09:49.031 EGE Aggregate Log Change Notices: Not Supported 00:09:49.031 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.031 Zone Descriptor Change Notices: Not Supported 00:09:49.031 Discovery Log Change Notices: Not Supported 00:09:49.031 Controller Attributes 00:09:49.031 128-bit Host Identifier: Not Supported 00:09:49.031 Non-Operational Permissive Mode: Not Supported 00:09:49.031 NVM Sets: Not Supported 00:09:49.031 Read Recovery Levels: Not Supported 00:09:49.031 Endurance Groups: Not Supported 00:09:49.031 Predictable Latency Mode: Not Supported 00:09:49.031 Traffic Based Keep ALive: Not Supported 00:09:49.031 Namespace Granularity: Not Supported 00:09:49.031 SQ Associations: Not Supported 00:09:49.031 UUID List: Not Supported 00:09:49.031 Multi-Domain Subsystem: Not Supported 00:09:49.031 Fixed Capacity Management: Not Supported 00:09:49.031 Variable Capacity Management: Not Supported 00:09:49.031 Delete Endurance Group: Not Supported 00:09:49.031 Delete NVM Set: Not Supported 00:09:49.031 Extended LBA Formats Supported: Supported 00:09:49.031 Flexible Data Placement Supported: Not Supported 00:09:49.031 00:09:49.031 Controller Memory Buffer Support 00:09:49.031 ================================ 00:09:49.031 Supported: No 00:09:49.031 00:09:49.031 Persistent Memory Region Support 00:09:49.031 ================================ 00:09:49.031 Supported: No 00:09:49.031 00:09:49.031 Admin Command Set Attributes 00:09:49.031 ============================ 00:09:49.031 Security Send/Receive: Not Supported 00:09:49.031 Format NVM: Supported 00:09:49.031 Firmware Activate/Download: Not Supported 00:09:49.031 Namespace Management: Supported 00:09:49.031 Device Self-Test: Not Supported 00:09:49.031 Directives: Supported 00:09:49.031 NVMe-MI: Not Supported 00:09:49.031 Virtualization Management: Not Supported 00:09:49.031 Doorbell Buffer Config: Supported 00:09:49.031 Get LBA Status Capability: Not Supported 00:09:49.031 Command & Feature Lockdown Capability: Not Supported 00:09:49.031 Abort Command Limit: 4 00:09:49.031 Async Event Request Limit: 4 00:09:49.031 Number of Firmware Slots: N/A 00:09:49.031 Firmware Slot 1 Read-Only: N/A 00:09:49.031 Firmware Activation Without Reset: N/A 00:09:49.031 Multiple Update Detection Support: N/A 00:09:49.031 Firmware Update 
Granularity: No Information Provided 00:09:49.031 Per-Namespace SMART Log: Yes 00:09:49.031 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.031 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:49.031 Command Effects Log Page: Supported 00:09:49.031 Get Log Page Extended Data: Supported 00:09:49.031 Telemetry Log Pages: Not Supported 00:09:49.031 Persistent Event Log Pages: Not Supported 00:09:49.031 Supported Log Pages Log Page: May Support 00:09:49.031 Commands Supported & Effects Log Page: Not Supported 00:09:49.031 Feature Identifiers & Effects Log Page:May Support 00:09:49.031 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.031 Data Area 4 for Telemetry Log: Not Supported 00:09:49.031 Error Log Page Entries Supported: 1 00:09:49.031 Keep Alive: Not Supported 00:09:49.031 00:09:49.031 NVM Command Set Attributes 00:09:49.031 ========================== 00:09:49.031 Submission Queue Entry Size 00:09:49.031 Max: 64 00:09:49.031 Min: 64 00:09:49.031 Completion Queue Entry Size 00:09:49.031 Max: 16 00:09:49.031 Min: 16 00:09:49.031 Number of Namespaces: 256 00:09:49.031 Compare Command: Supported 00:09:49.031 Write Uncorrectable Command: Not Supported 00:09:49.031 Dataset Management Command: Supported 00:09:49.031 Write Zeroes Command: Supported 00:09:49.031 Set Features Save Field: Supported 00:09:49.031 Reservations: Not Supported 00:09:49.031 Timestamp: Supported 00:09:49.031 Copy: Supported 00:09:49.031 Volatile Write Cache: Present 00:09:49.031 Atomic Write Unit (Normal): 1 00:09:49.031 Atomic Write Unit (PFail): 1 00:09:49.031 Atomic Compare & Write Unit: 1 00:09:49.031 Fused Compare & Write: Not Supported 00:09:49.031 Scatter-Gather List 00:09:49.031 SGL Command Set: Supported 00:09:49.031 SGL Keyed: Not Supported 00:09:49.031 SGL Bit Bucket Descriptor: Not Supported 00:09:49.031 SGL Metadata Pointer: Not Supported 00:09:49.031 Oversized SGL: Not Supported 00:09:49.031 SGL Metadata Address: Not Supported 00:09:49.031 SGL Offset: Not Supported 00:09:49.031 Transport SGL Data Block: Not Supported 00:09:49.031 Replay Protected Memory Block: Not Supported 00:09:49.031 00:09:49.031 Firmware Slot Information 00:09:49.032 ========================= 00:09:49.032 Active slot: 1 00:09:49.032 Slot 1 Firmware Revision: 1.0 00:09:49.032 00:09:49.032 00:09:49.032 Commands Supported and Effects 00:09:49.032 ============================== 00:09:49.032 Admin Commands 00:09:49.032 -------------- 00:09:49.032 Delete I/O Submission Queue (00h): Supported 00:09:49.032 Create I/O Submission Queue (01h): Supported 00:09:49.032 Get Log Page (02h): Supported 00:09:49.032 Delete I/O Completion Queue (04h): Supported 00:09:49.032 Create I/O Completion Queue (05h): Supported 00:09:49.032 Identify (06h): Supported 00:09:49.032 Abort (08h): Supported 00:09:49.032 Set Features (09h): Supported 00:09:49.032 Get Features (0Ah): Supported 00:09:49.032 Asynchronous Event Request (0Ch): Supported 00:09:49.032 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.032 Directive Send (19h): Supported 00:09:49.032 Directive Receive (1Ah): Supported 00:09:49.032 Virtualization Management (1Ch): Supported 00:09:49.032 Doorbell Buffer Config (7Ch): Supported 00:09:49.032 Format NVM (80h): Supported LBA-Change 00:09:49.032 I/O Commands 00:09:49.032 ------------ 00:09:49.032 Flush (00h): Supported LBA-Change 00:09:49.032 Write (01h): Supported LBA-Change 00:09:49.032 Read (02h): Supported 00:09:49.032 Compare (05h): Supported 00:09:49.032 Write Zeroes (08h): Supported LBA-Change 00:09:49.032 
Dataset Management (09h): Supported LBA-Change 00:09:49.032 Unknown (0Ch): Supported 00:09:49.032 Unknown (12h): Supported 00:09:49.032 Copy (19h): Supported LBA-Change 00:09:49.032 Unknown (1Dh): Supported LBA-Change 00:09:49.032 00:09:49.032 Error Log 00:09:49.032 ========= 00:09:49.032 00:09:49.032 Arbitration 00:09:49.032 =========== 00:09:49.032 Arbitration Burst: no limit 00:09:49.032 00:09:49.032 Power Management 00:09:49.032 ================ 00:09:49.032 Number of Power States: 1 00:09:49.032 Current Power State: Power State #0 00:09:49.032 Power State #0: 00:09:49.032 Max Power: 25.00 W 00:09:49.032 Non-Operational State: Operational 00:09:49.032 Entry Latency: 16 microseconds 00:09:49.032 Exit Latency: 4 microseconds 00:09:49.032 Relative Read Throughput: 0 00:09:49.032 Relative Read Latency: 0 00:09:49.032 Relative Write Throughput: 0 00:09:49.032 Relative Write Latency: 0 00:09:49.032 Idle Power: Not Reported 00:09:49.032 Active Power: Not Reported 00:09:49.032 Non-Operational Permissive Mode: Not Supported 00:09:49.032 00:09:49.032 Health Information 00:09:49.032 ================== 00:09:49.032 Critical Warnings: 00:09:49.032 Available Spare Space: OK 00:09:49.032 Temperature: OK 00:09:49.032 Device Reliability: OK 00:09:49.032 Read Only: No 00:09:49.032 Volatile Memory Backup: OK 00:09:49.032 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.032 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.032 Available Spare: 0% 00:09:49.032 Available Spare Threshold: 0% 00:09:49.032 Life Percentage Used: 0% 00:09:49.032 Data Units Read: 1130 00:09:49.032 Data Units Written: 996 00:09:49.032 Host Read Commands: 48125 00:09:49.032 Host Write Commands: 46914 00:09:49.032 Controller Busy Time: 0 minutes 00:09:49.032 Power Cycles: 0 00:09:49.032 Power On Hours: 0 hours 00:09:49.032 Unsafe Shutdowns: 0 00:09:49.032 Unrecoverable Media Errors: 0 00:09:49.032 Lifetime Error Log Entries: 0 00:09:49.032 Warning Temperature Time: 0 minutes 00:09:49.032 Critical Temperature Time: 0 minutes 00:09:49.032 00:09:49.032 Number of Queues 00:09:49.032 ================ 00:09:49.032 Number of I/O Submission Queues: 64 00:09:49.032 Number of I/O Completion Queues: 64 00:09:49.032 00:09:49.032 ZNS Specific Controller Data 00:09:49.032 ============================ 00:09:49.032 Zone Append Size Limit: 0 00:09:49.032 00:09:49.032 00:09:49.032 Active Namespaces 00:09:49.032 ================= 00:09:49.032 Namespace ID:1 00:09:49.032 Error Recovery Timeout: Unlimited 00:09:49.032 Command Set Identifier: NVM (00h) 00:09:49.032 Deallocate: Supported 00:09:49.032 Deallocated/Unwritten Error: Supported 00:09:49.032 Deallocated Read Value: All 0x00 00:09:49.032 Deallocate in Write Zeroes: Not Supported 00:09:49.032 Deallocated Guard Field: 0xFFFF 00:09:49.032 Flush: Supported 00:09:49.032 Reservation: Not Supported 00:09:49.032 Namespace Sharing Capabilities: Private 00:09:49.032 Size (in LBAs): 1310720 (5GiB) 00:09:49.032 Capacity (in LBAs): 1310720 (5GiB) 00:09:49.032 Utilization (in LBAs): 1310720 (5GiB) 00:09:49.032 Thin Provisioning: Not Supported 00:09:49.032 Per-NS Atomic Units: No 00:09:49.032 Maximum Single Source Range Length: 128 00:09:49.032 Maximum Copy Length: 128 00:09:49.032 Maximum Source Range Count: 128 00:09:49.032 NGUID/EUI64 Never Reused: No 00:09:49.032 Namespace Write Protected: No 00:09:49.032 Number of LBA Formats: 8 00:09:49.032 Current LBA Format: LBA Format #04 00:09:49.032 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.032 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:09:49.032 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.032 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.032 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.032 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.032 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.032 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.032 00:09:49.032 NVM Specific Namespace Data 00:09:49.032 =========================== 00:09:49.032 Logical Block Storage Tag Mask: 0 00:09:49.032 Protection Information Capabilities: 00:09:49.032 16b Guard Protection Information Storage Tag Support: No 00:09:49.032 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:49.032 Storage Tag Check Read Support: No 00:09:49.032 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.032 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.032 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:49.292 ===================================================== 00:09:49.293 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.293 ===================================================== 00:09:49.293 Controller Capabilities/Features 00:09:49.293 ================================ 00:09:49.293 Vendor ID: 1b36 00:09:49.293 Subsystem Vendor ID: 1af4 00:09:49.293 Serial Number: 12342 00:09:49.293 Model Number: QEMU NVMe Ctrl 00:09:49.293 Firmware Version: 8.0.0 00:09:49.293 Recommended Arb Burst: 6 00:09:49.293 IEEE OUI Identifier: 00 54 52 00:09:49.293 Multi-path I/O 00:09:49.293 May have multiple subsystem ports: No 00:09:49.293 May have multiple controllers: No 00:09:49.293 Associated with SR-IOV VF: No 00:09:49.293 Max Data Transfer Size: 524288 00:09:49.293 Max Number of Namespaces: 256 00:09:49.293 Max Number of I/O Queues: 64 00:09:49.293 NVMe Specification Version (VS): 1.4 00:09:49.293 NVMe Specification Version (Identify): 1.4 00:09:49.293 Maximum Queue Entries: 2048 00:09:49.293 Contiguous Queues Required: Yes 00:09:49.293 Arbitration Mechanisms Supported 00:09:49.293 Weighted Round Robin: Not Supported 00:09:49.293 Vendor Specific: Not Supported 00:09:49.293 Reset Timeout: 7500 ms 00:09:49.293 Doorbell Stride: 4 bytes 00:09:49.293 NVM Subsystem Reset: Not Supported 00:09:49.293 Command Sets Supported 00:09:49.293 NVM Command Set: Supported 00:09:49.293 Boot Partition: Not Supported 00:09:49.293 Memory Page Size Minimum: 4096 bytes 00:09:49.293 Memory Page Size Maximum: 65536 bytes 00:09:49.293 Persistent Memory Region: Not Supported 00:09:49.293 Optional Asynchronous Events Supported 00:09:49.293 Namespace Attribute Notices: Supported 00:09:49.293 Firmware 
Activation Notices: Not Supported 00:09:49.293 ANA Change Notices: Not Supported 00:09:49.293 PLE Aggregate Log Change Notices: Not Supported 00:09:49.293 LBA Status Info Alert Notices: Not Supported 00:09:49.293 EGE Aggregate Log Change Notices: Not Supported 00:09:49.293 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.293 Zone Descriptor Change Notices: Not Supported 00:09:49.293 Discovery Log Change Notices: Not Supported 00:09:49.293 Controller Attributes 00:09:49.293 128-bit Host Identifier: Not Supported 00:09:49.293 Non-Operational Permissive Mode: Not Supported 00:09:49.293 NVM Sets: Not Supported 00:09:49.293 Read Recovery Levels: Not Supported 00:09:49.293 Endurance Groups: Not Supported 00:09:49.293 Predictable Latency Mode: Not Supported 00:09:49.293 Traffic Based Keep ALive: Not Supported 00:09:49.293 Namespace Granularity: Not Supported 00:09:49.293 SQ Associations: Not Supported 00:09:49.293 UUID List: Not Supported 00:09:49.293 Multi-Domain Subsystem: Not Supported 00:09:49.293 Fixed Capacity Management: Not Supported 00:09:49.293 Variable Capacity Management: Not Supported 00:09:49.293 Delete Endurance Group: Not Supported 00:09:49.293 Delete NVM Set: Not Supported 00:09:49.293 Extended LBA Formats Supported: Supported 00:09:49.293 Flexible Data Placement Supported: Not Supported 00:09:49.293 00:09:49.293 Controller Memory Buffer Support 00:09:49.293 ================================ 00:09:49.293 Supported: No 00:09:49.293 00:09:49.293 Persistent Memory Region Support 00:09:49.293 ================================ 00:09:49.293 Supported: No 00:09:49.293 00:09:49.293 Admin Command Set Attributes 00:09:49.293 ============================ 00:09:49.293 Security Send/Receive: Not Supported 00:09:49.293 Format NVM: Supported 00:09:49.293 Firmware Activate/Download: Not Supported 00:09:49.293 Namespace Management: Supported 00:09:49.293 Device Self-Test: Not Supported 00:09:49.293 Directives: Supported 00:09:49.293 NVMe-MI: Not Supported 00:09:49.293 Virtualization Management: Not Supported 00:09:49.293 Doorbell Buffer Config: Supported 00:09:49.293 Get LBA Status Capability: Not Supported 00:09:49.293 Command & Feature Lockdown Capability: Not Supported 00:09:49.293 Abort Command Limit: 4 00:09:49.293 Async Event Request Limit: 4 00:09:49.293 Number of Firmware Slots: N/A 00:09:49.293 Firmware Slot 1 Read-Only: N/A 00:09:49.293 Firmware Activation Without Reset: N/A 00:09:49.293 Multiple Update Detection Support: N/A 00:09:49.293 Firmware Update Granularity: No Information Provided 00:09:49.293 Per-Namespace SMART Log: Yes 00:09:49.293 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.293 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:49.293 Command Effects Log Page: Supported 00:09:49.293 Get Log Page Extended Data: Supported 00:09:49.293 Telemetry Log Pages: Not Supported 00:09:49.293 Persistent Event Log Pages: Not Supported 00:09:49.293 Supported Log Pages Log Page: May Support 00:09:49.293 Commands Supported & Effects Log Page: Not Supported 00:09:49.293 Feature Identifiers & Effects Log Page:May Support 00:09:49.293 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.293 Data Area 4 for Telemetry Log: Not Supported 00:09:49.293 Error Log Page Entries Supported: 1 00:09:49.293 Keep Alive: Not Supported 00:09:49.293 00:09:49.293 NVM Command Set Attributes 00:09:49.293 ========================== 00:09:49.293 Submission Queue Entry Size 00:09:49.293 Max: 64 00:09:49.293 Min: 64 00:09:49.293 Completion Queue Entry Size 00:09:49.293 Max: 16 
00:09:49.293 Min: 16 00:09:49.293 Number of Namespaces: 256 00:09:49.293 Compare Command: Supported 00:09:49.293 Write Uncorrectable Command: Not Supported 00:09:49.293 Dataset Management Command: Supported 00:09:49.293 Write Zeroes Command: Supported 00:09:49.293 Set Features Save Field: Supported 00:09:49.293 Reservations: Not Supported 00:09:49.293 Timestamp: Supported 00:09:49.293 Copy: Supported 00:09:49.293 Volatile Write Cache: Present 00:09:49.293 Atomic Write Unit (Normal): 1 00:09:49.293 Atomic Write Unit (PFail): 1 00:09:49.293 Atomic Compare & Write Unit: 1 00:09:49.293 Fused Compare & Write: Not Supported 00:09:49.293 Scatter-Gather List 00:09:49.293 SGL Command Set: Supported 00:09:49.293 SGL Keyed: Not Supported 00:09:49.293 SGL Bit Bucket Descriptor: Not Supported 00:09:49.293 SGL Metadata Pointer: Not Supported 00:09:49.293 Oversized SGL: Not Supported 00:09:49.293 SGL Metadata Address: Not Supported 00:09:49.293 SGL Offset: Not Supported 00:09:49.293 Transport SGL Data Block: Not Supported 00:09:49.293 Replay Protected Memory Block: Not Supported 00:09:49.293 00:09:49.293 Firmware Slot Information 00:09:49.293 ========================= 00:09:49.293 Active slot: 1 00:09:49.293 Slot 1 Firmware Revision: 1.0 00:09:49.293 00:09:49.293 00:09:49.293 Commands Supported and Effects 00:09:49.293 ============================== 00:09:49.293 Admin Commands 00:09:49.293 -------------- 00:09:49.293 Delete I/O Submission Queue (00h): Supported 00:09:49.293 Create I/O Submission Queue (01h): Supported 00:09:49.293 Get Log Page (02h): Supported 00:09:49.293 Delete I/O Completion Queue (04h): Supported 00:09:49.293 Create I/O Completion Queue (05h): Supported 00:09:49.293 Identify (06h): Supported 00:09:49.293 Abort (08h): Supported 00:09:49.293 Set Features (09h): Supported 00:09:49.293 Get Features (0Ah): Supported 00:09:49.293 Asynchronous Event Request (0Ch): Supported 00:09:49.293 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.293 Directive Send (19h): Supported 00:09:49.293 Directive Receive (1Ah): Supported 00:09:49.293 Virtualization Management (1Ch): Supported 00:09:49.293 Doorbell Buffer Config (7Ch): Supported 00:09:49.293 Format NVM (80h): Supported LBA-Change 00:09:49.293 I/O Commands 00:09:49.293 ------------ 00:09:49.293 Flush (00h): Supported LBA-Change 00:09:49.293 Write (01h): Supported LBA-Change 00:09:49.293 Read (02h): Supported 00:09:49.293 Compare (05h): Supported 00:09:49.293 Write Zeroes (08h): Supported LBA-Change 00:09:49.293 Dataset Management (09h): Supported LBA-Change 00:09:49.293 Unknown (0Ch): Supported 00:09:49.293 Unknown (12h): Supported 00:09:49.293 Copy (19h): Supported LBA-Change 00:09:49.293 Unknown (1Dh): Supported LBA-Change 00:09:49.293 00:09:49.293 Error Log 00:09:49.293 ========= 00:09:49.293 00:09:49.293 Arbitration 00:09:49.293 =========== 00:09:49.293 Arbitration Burst: no limit 00:09:49.293 00:09:49.293 Power Management 00:09:49.293 ================ 00:09:49.293 Number of Power States: 1 00:09:49.293 Current Power State: Power State #0 00:09:49.293 Power State #0: 00:09:49.293 Max Power: 25.00 W 00:09:49.293 Non-Operational State: Operational 00:09:49.293 Entry Latency: 16 microseconds 00:09:49.293 Exit Latency: 4 microseconds 00:09:49.293 Relative Read Throughput: 0 00:09:49.293 Relative Read Latency: 0 00:09:49.293 Relative Write Throughput: 0 00:09:49.293 Relative Write Latency: 0 00:09:49.294 Idle Power: Not Reported 00:09:49.294 Active Power: Not Reported 00:09:49.294 Non-Operational Permissive Mode: Not Supported 
00:09:49.294 00:09:49.294 Health Information 00:09:49.294 ================== 00:09:49.294 Critical Warnings: 00:09:49.294 Available Spare Space: OK 00:09:49.294 Temperature: OK 00:09:49.294 Device Reliability: OK 00:09:49.294 Read Only: No 00:09:49.294 Volatile Memory Backup: OK 00:09:49.294 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.294 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.294 Available Spare: 0% 00:09:49.294 Available Spare Threshold: 0% 00:09:49.294 Life Percentage Used: 0% 00:09:49.294 Data Units Read: 2428 00:09:49.294 Data Units Written: 2215 00:09:49.294 Host Read Commands: 101468 00:09:49.294 Host Write Commands: 99737 00:09:49.294 Controller Busy Time: 0 minutes 00:09:49.294 Power Cycles: 0 00:09:49.294 Power On Hours: 0 hours 00:09:49.294 Unsafe Shutdowns: 0 00:09:49.294 Unrecoverable Media Errors: 0 00:09:49.294 Lifetime Error Log Entries: 0 00:09:49.294 Warning Temperature Time: 0 minutes 00:09:49.294 Critical Temperature Time: 0 minutes 00:09:49.294 00:09:49.294 Number of Queues 00:09:49.294 ================ 00:09:49.294 Number of I/O Submission Queues: 64 00:09:49.294 Number of I/O Completion Queues: 64 00:09:49.294 00:09:49.294 ZNS Specific Controller Data 00:09:49.294 ============================ 00:09:49.294 Zone Append Size Limit: 0 00:09:49.294 00:09:49.294 00:09:49.294 Active Namespaces 00:09:49.294 ================= 00:09:49.294 Namespace ID:1 00:09:49.294 Error Recovery Timeout: Unlimited 00:09:49.294 Command Set Identifier: NVM (00h) 00:09:49.294 Deallocate: Supported 00:09:49.294 Deallocated/Unwritten Error: Supported 00:09:49.294 Deallocated Read Value: All 0x00 00:09:49.294 Deallocate in Write Zeroes: Not Supported 00:09:49.294 Deallocated Guard Field: 0xFFFF 00:09:49.294 Flush: Supported 00:09:49.294 Reservation: Not Supported 00:09:49.294 Namespace Sharing Capabilities: Private 00:09:49.294 Size (in LBAs): 1048576 (4GiB) 00:09:49.294 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.294 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.294 Thin Provisioning: Not Supported 00:09:49.294 Per-NS Atomic Units: No 00:09:49.294 Maximum Single Source Range Length: 128 00:09:49.294 Maximum Copy Length: 128 00:09:49.294 Maximum Source Range Count: 128 00:09:49.294 NGUID/EUI64 Never Reused: No 00:09:49.294 Namespace Write Protected: No 00:09:49.294 Number of LBA Formats: 8 00:09:49.294 Current LBA Format: LBA Format #04 00:09:49.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.294 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.294 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.294 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.294 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.294 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.294 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.294 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.294 00:09:49.294 NVM Specific Namespace Data 00:09:49.294 =========================== 00:09:49.294 Logical Block Storage Tag Mask: 0 00:09:49.294 Protection Information Capabilities: 00:09:49.294 16b Guard Protection Information Storage Tag Support: No 00:09:49.294 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:49.294 Storage Tag Check Read Support: No 00:09:49.294 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Namespace ID:2 00:09:49.294 Error Recovery Timeout: Unlimited 00:09:49.294 Command Set Identifier: NVM (00h) 00:09:49.294 Deallocate: Supported 00:09:49.294 Deallocated/Unwritten Error: Supported 00:09:49.294 Deallocated Read Value: All 0x00 00:09:49.294 Deallocate in Write Zeroes: Not Supported 00:09:49.294 Deallocated Guard Field: 0xFFFF 00:09:49.294 Flush: Supported 00:09:49.294 Reservation: Not Supported 00:09:49.294 Namespace Sharing Capabilities: Private 00:09:49.294 Size (in LBAs): 1048576 (4GiB) 00:09:49.294 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.294 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.294 Thin Provisioning: Not Supported 00:09:49.294 Per-NS Atomic Units: No 00:09:49.294 Maximum Single Source Range Length: 128 00:09:49.294 Maximum Copy Length: 128 00:09:49.294 Maximum Source Range Count: 128 00:09:49.294 NGUID/EUI64 Never Reused: No 00:09:49.294 Namespace Write Protected: No 00:09:49.294 Number of LBA Formats: 8 00:09:49.294 Current LBA Format: LBA Format #04 00:09:49.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.294 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.294 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.294 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.294 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.294 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.294 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.294 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.294 00:09:49.294 NVM Specific Namespace Data 00:09:49.294 =========================== 00:09:49.294 Logical Block Storage Tag Mask: 0 00:09:49.294 Protection Information Capabilities: 00:09:49.294 16b Guard Protection Information Storage Tag Support: No 00:09:49.294 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:49.294 Storage Tag Check Read Support: No 00:09:49.294 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Namespace ID:3 00:09:49.294 Error Recovery Timeout: Unlimited 00:09:49.294 Command Set Identifier: NVM (00h) 00:09:49.294 Deallocate: Supported 00:09:49.294 Deallocated/Unwritten Error: Supported 00:09:49.294 Deallocated Read 
Value: All 0x00 00:09:49.294 Deallocate in Write Zeroes: Not Supported 00:09:49.294 Deallocated Guard Field: 0xFFFF 00:09:49.294 Flush: Supported 00:09:49.294 Reservation: Not Supported 00:09:49.294 Namespace Sharing Capabilities: Private 00:09:49.294 Size (in LBAs): 1048576 (4GiB) 00:09:49.294 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.294 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.294 Thin Provisioning: Not Supported 00:09:49.294 Per-NS Atomic Units: No 00:09:49.294 Maximum Single Source Range Length: 128 00:09:49.294 Maximum Copy Length: 128 00:09:49.294 Maximum Source Range Count: 128 00:09:49.294 NGUID/EUI64 Never Reused: No 00:09:49.294 Namespace Write Protected: No 00:09:49.294 Number of LBA Formats: 8 00:09:49.294 Current LBA Format: LBA Format #04 00:09:49.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.294 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.294 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.294 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.294 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.294 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.294 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.294 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.294 00:09:49.294 NVM Specific Namespace Data 00:09:49.294 =========================== 00:09:49.294 Logical Block Storage Tag Mask: 0 00:09:49.294 Protection Information Capabilities: 00:09:49.294 16b Guard Protection Information Storage Tag Support: No 00:09:49.294 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:49.294 Storage Tag Check Read Support: No 00:09:49.294 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.294 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.295 03:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:49.555 ===================================================== 00:09:49.555 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.555 ===================================================== 00:09:49.555 Controller Capabilities/Features 00:09:49.555 ================================ 00:09:49.555 Vendor ID: 1b36 00:09:49.555 Subsystem Vendor ID: 1af4 00:09:49.555 Serial Number: 12343 00:09:49.555 Model Number: QEMU NVMe Ctrl 00:09:49.555 Firmware Version: 8.0.0 00:09:49.555 Recommended Arb Burst: 6 00:09:49.555 IEEE OUI Identifier: 00 54 52 00:09:49.555 Multi-path I/O 00:09:49.555 May have multiple subsystem ports: No 00:09:49.555 May have multiple controllers: Yes 00:09:49.555 Associated with SR-IOV VF: No 00:09:49.555 Max Data Transfer Size: 524288 00:09:49.555 Max Number of Namespaces: 
256 00:09:49.555 Max Number of I/O Queues: 64 00:09:49.555 NVMe Specification Version (VS): 1.4 00:09:49.555 NVMe Specification Version (Identify): 1.4 00:09:49.555 Maximum Queue Entries: 2048 00:09:49.555 Contiguous Queues Required: Yes 00:09:49.555 Arbitration Mechanisms Supported 00:09:49.555 Weighted Round Robin: Not Supported 00:09:49.555 Vendor Specific: Not Supported 00:09:49.555 Reset Timeout: 7500 ms 00:09:49.555 Doorbell Stride: 4 bytes 00:09:49.555 NVM Subsystem Reset: Not Supported 00:09:49.555 Command Sets Supported 00:09:49.555 NVM Command Set: Supported 00:09:49.555 Boot Partition: Not Supported 00:09:49.555 Memory Page Size Minimum: 4096 bytes 00:09:49.555 Memory Page Size Maximum: 65536 bytes 00:09:49.555 Persistent Memory Region: Not Supported 00:09:49.555 Optional Asynchronous Events Supported 00:09:49.555 Namespace Attribute Notices: Supported 00:09:49.555 Firmware Activation Notices: Not Supported 00:09:49.555 ANA Change Notices: Not Supported 00:09:49.555 PLE Aggregate Log Change Notices: Not Supported 00:09:49.555 LBA Status Info Alert Notices: Not Supported 00:09:49.555 EGE Aggregate Log Change Notices: Not Supported 00:09:49.555 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.555 Zone Descriptor Change Notices: Not Supported 00:09:49.555 Discovery Log Change Notices: Not Supported 00:09:49.555 Controller Attributes 00:09:49.555 128-bit Host Identifier: Not Supported 00:09:49.555 Non-Operational Permissive Mode: Not Supported 00:09:49.555 NVM Sets: Not Supported 00:09:49.555 Read Recovery Levels: Not Supported 00:09:49.555 Endurance Groups: Supported 00:09:49.555 Predictable Latency Mode: Not Supported 00:09:49.555 Traffic Based Keep Alive: Not Supported 00:09:49.555 Namespace Granularity: Not Supported 00:09:49.555 SQ Associations: Not Supported 00:09:49.555 UUID List: Not Supported 00:09:49.555 Multi-Domain Subsystem: Not Supported 00:09:49.555 Fixed Capacity Management: Not Supported 00:09:49.555 Variable Capacity Management: Not Supported 00:09:49.555 Delete Endurance Group: Not Supported 00:09:49.555 Delete NVM Set: Not Supported 00:09:49.555 Extended LBA Formats Supported: Supported 00:09:49.555 Flexible Data Placement Supported: Supported 00:09:49.555 00:09:49.555 Controller Memory Buffer Support 00:09:49.555 ================================ 00:09:49.555 Supported: No 00:09:49.555 00:09:49.555 Persistent Memory Region Support 00:09:49.555 ================================ 00:09:49.555 Supported: No 00:09:49.555 00:09:49.555 Admin Command Set Attributes 00:09:49.555 ============================ 00:09:49.555 Security Send/Receive: Not Supported 00:09:49.555 Format NVM: Supported 00:09:49.555 Firmware Activate/Download: Not Supported 00:09:49.555 Namespace Management: Supported 00:09:49.555 Device Self-Test: Not Supported 00:09:49.555 Directives: Supported 00:09:49.555 NVMe-MI: Not Supported 00:09:49.555 Virtualization Management: Not Supported 00:09:49.555 Doorbell Buffer Config: Supported 00:09:49.555 Get LBA Status Capability: Not Supported 00:09:49.555 Command & Feature Lockdown Capability: Not Supported 00:09:49.555 Abort Command Limit: 4 00:09:49.555 Async Event Request Limit: 4 00:09:49.555 Number of Firmware Slots: N/A 00:09:49.555 Firmware Slot 1 Read-Only: N/A 00:09:49.555 Firmware Activation Without Reset: N/A 00:09:49.555 Multiple Update Detection Support: N/A 00:09:49.555 Firmware Update Granularity: No Information Provided 00:09:49.555 Per-Namespace SMART Log: Yes 00:09:49.555 Asymmetric Namespace Access Log Page: Not Supported
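The controller-level dump above shows the QEMU-emulated subsystem at 0000:00:13.0 advertising both Endurance Groups and Flexible Data Placement (FDP); its log pages and namespace details follow below. As a minimal sketch of how this identify pass can be replayed outside the harness, the loop below mirrors the for bdf in "${bdfs[@]}" step from nvme.sh visible earlier in the log, using the same binary and flags. The BDF list is an assumption taken from the four controllers this run attaches:

# Sketch only: replay the per-controller identify step from nvme.sh.
# The BDF list is an assumption based on this run's attached controllers;
# on another VM, enumerate the emulated devices with: lspci -d 1b36:0010
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:PCIe traddr:$bdf" -i 0
done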
00:09:49.555 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:49.555 Command Effects Log Page: Supported 00:09:49.555 Get Log Page Extended Data: Supported 00:09:49.555 Telemetry Log Pages: Not Supported 00:09:49.555 Persistent Event Log Pages: Not Supported 00:09:49.555 Supported Log Pages Log Page: May Support 00:09:49.555 Commands Supported & Effects Log Page: Not Supported 00:09:49.555 Feature Identifiers & Effects Log Page: May Support 00:09:49.555 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.555 Data Area 4 for Telemetry Log: Not Supported 00:09:49.555 Error Log Page Entries Supported: 1 00:09:49.555 Keep Alive: Not Supported 00:09:49.555 00:09:49.555 NVM Command Set Attributes 00:09:49.555 ========================== 00:09:49.555 Submission Queue Entry Size 00:09:49.555 Max: 64 00:09:49.555 Min: 64 00:09:49.555 Completion Queue Entry Size 00:09:49.555 Max: 16 00:09:49.555 Min: 16 00:09:49.555 Number of Namespaces: 256 00:09:49.555 Compare Command: Supported 00:09:49.555 Write Uncorrectable Command: Not Supported 00:09:49.555 Dataset Management Command: Supported 00:09:49.555 Write Zeroes Command: Supported 00:09:49.555 Set Features Save Field: Supported 00:09:49.555 Reservations: Not Supported 00:09:49.555 Timestamp: Supported 00:09:49.555 Copy: Supported 00:09:49.555 Volatile Write Cache: Present 00:09:49.555 Atomic Write Unit (Normal): 1 00:09:49.555 Atomic Write Unit (PFail): 1 00:09:49.555 Atomic Compare & Write Unit: 1 00:09:49.555 Fused Compare & Write: Not Supported 00:09:49.555 Scatter-Gather List 00:09:49.555 SGL Command Set: Supported 00:09:49.555 SGL Keyed: Not Supported 00:09:49.555 SGL Bit Bucket Descriptor: Not Supported 00:09:49.555 SGL Metadata Pointer: Not Supported 00:09:49.555 Oversized SGL: Not Supported 00:09:49.555 SGL Metadata Address: Not Supported 00:09:49.555 SGL Offset: Not Supported 00:09:49.555 Transport SGL Data Block: Not Supported 00:09:49.555 Replay Protected Memory Block: Not Supported 00:09:49.555 00:09:49.555 Firmware Slot Information 00:09:49.555 ========================= 00:09:49.555 Active slot: 1 00:09:49.555 Slot 1 Firmware Revision: 1.0 00:09:49.555 00:09:49.555 00:09:49.555 Commands Supported and Effects 00:09:49.555 ============================== 00:09:49.555 Admin Commands 00:09:49.555 -------------- 00:09:49.555 Delete I/O Submission Queue (00h): Supported 00:09:49.555 Create I/O Submission Queue (01h): Supported 00:09:49.555 Get Log Page (02h): Supported 00:09:49.555 Delete I/O Completion Queue (04h): Supported 00:09:49.555 Create I/O Completion Queue (05h): Supported 00:09:49.555 Identify (06h): Supported 00:09:49.555 Abort (08h): Supported 00:09:49.555 Set Features (09h): Supported 00:09:49.555 Get Features (0Ah): Supported 00:09:49.555 Asynchronous Event Request (0Ch): Supported 00:09:49.555 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.555 Directive Send (19h): Supported 00:09:49.555 Directive Receive (1Ah): Supported 00:09:49.555 Virtualization Management (1Ch): Supported 00:09:49.555 Doorbell Buffer Config (7Ch): Supported 00:09:49.555 Format NVM (80h): Supported LBA-Change 00:09:49.555 I/O Commands 00:09:49.555 ------------ 00:09:49.555 Flush (00h): Supported LBA-Change 00:09:49.555 Write (01h): Supported LBA-Change 00:09:49.555 Read (02h): Supported 00:09:49.555 Compare (05h): Supported 00:09:49.555 Write Zeroes (08h): Supported LBA-Change 00:09:49.555 Dataset Management (09h): Supported LBA-Change 00:09:49.555 Unknown (0Ch): Supported 00:09:49.555 Unknown (12h): Supported 00:09:49.555 Copy
(19h): Supported LBA-Change 00:09:49.555 Unknown (1Dh): Supported LBA-Change 00:09:49.555 00:09:49.555 Error Log 00:09:49.555 ========= 00:09:49.555 00:09:49.555 Arbitration 00:09:49.555 =========== 00:09:49.555 Arbitration Burst: no limit 00:09:49.555 00:09:49.555 Power Management 00:09:49.555 ================ 00:09:49.555 Number of Power States: 1 00:09:49.555 Current Power State: Power State #0 00:09:49.555 Power State #0: 00:09:49.555 Max Power: 25.00 W 00:09:49.555 Non-Operational State: Operational 00:09:49.556 Entry Latency: 16 microseconds 00:09:49.556 Exit Latency: 4 microseconds 00:09:49.556 Relative Read Throughput: 0 00:09:49.556 Relative Read Latency: 0 00:09:49.556 Relative Write Throughput: 0 00:09:49.556 Relative Write Latency: 0 00:09:49.556 Idle Power: Not Reported 00:09:49.556 Active Power: Not Reported 00:09:49.556 Non-Operational Permissive Mode: Not Supported 00:09:49.556 00:09:49.556 Health Information 00:09:49.556 ================== 00:09:49.556 Critical Warnings: 00:09:49.556 Available Spare Space: OK 00:09:49.556 Temperature: OK 00:09:49.556 Device Reliability: OK 00:09:49.556 Read Only: No 00:09:49.556 Volatile Memory Backup: OK 00:09:49.556 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.556 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.556 Available Spare: 0% 00:09:49.556 Available Spare Threshold: 0% 00:09:49.556 Life Percentage Used: 0% 00:09:49.556 Data Units Read: 943 00:09:49.556 Data Units Written: 872 00:09:49.556 Host Read Commands: 34862 00:09:49.556 Host Write Commands: 34285 00:09:49.556 Controller Busy Time: 0 minutes 00:09:49.556 Power Cycles: 0 00:09:49.556 Power On Hours: 0 hours 00:09:49.556 Unsafe Shutdowns: 0 00:09:49.556 Unrecoverable Media Errors: 0 00:09:49.556 Lifetime Error Log Entries: 0 00:09:49.556 Warning Temperature Time: 0 minutes 00:09:49.556 Critical Temperature Time: 0 minutes 00:09:49.556 00:09:49.556 Number of Queues 00:09:49.556 ================ 00:09:49.556 Number of I/O Submission Queues: 64 00:09:49.556 Number of I/O Completion Queues: 64 00:09:49.556 00:09:49.556 ZNS Specific Controller Data 00:09:49.556 ============================ 00:09:49.556 Zone Append Size Limit: 0 00:09:49.556 00:09:49.556 00:09:49.556 Active Namespaces 00:09:49.556 ================= 00:09:49.556 Namespace ID:1 00:09:49.556 Error Recovery Timeout: Unlimited 00:09:49.556 Command Set Identifier: NVM (00h) 00:09:49.556 Deallocate: Supported 00:09:49.556 Deallocated/Unwritten Error: Supported 00:09:49.556 Deallocated Read Value: All 0x00 00:09:49.556 Deallocate in Write Zeroes: Not Supported 00:09:49.556 Deallocated Guard Field: 0xFFFF 00:09:49.556 Flush: Supported 00:09:49.556 Reservation: Not Supported 00:09:49.556 Namespace Sharing Capabilities: Multiple Controllers 00:09:49.556 Size (in LBAs): 262144 (1GiB) 00:09:49.556 Capacity (in LBAs): 262144 (1GiB) 00:09:49.556 Utilization (in LBAs): 262144 (1GiB) 00:09:49.556 Thin Provisioning: Not Supported 00:09:49.556 Per-NS Atomic Units: No 00:09:49.556 Maximum Single Source Range Length: 128 00:09:49.556 Maximum Copy Length: 128 00:09:49.556 Maximum Source Range Count: 128 00:09:49.556 NGUID/EUI64 Never Reused: No 00:09:49.556 Namespace Write Protected: No 00:09:49.556 Endurance group ID: 1 00:09:49.556 Number of LBA Formats: 8 00:09:49.556 Current LBA Format: LBA Format #04 00:09:49.556 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.556 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.556 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.556 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:49.556 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.556 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.556 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.556 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.556 00:09:49.556 Get Feature FDP: 00:09:49.556 ================ 00:09:49.556 Enabled: Yes 00:09:49.556 FDP configuration index: 0 00:09:49.556 00:09:49.556 FDP configurations log page 00:09:49.556 =========================== 00:09:49.556 Number of FDP configurations: 1 00:09:49.556 Version: 0 00:09:49.556 Size: 112 00:09:49.556 FDP Configuration Descriptor: 0 00:09:49.556 Descriptor Size: 96 00:09:49.556 Reclaim Group Identifier format: 2 00:09:49.556 FDP Volatile Write Cache: Not Present 00:09:49.556 FDP Configuration: Valid 00:09:49.556 Vendor Specific Size: 0 00:09:49.556 Number of Reclaim Groups: 2 00:09:49.556 Number of Reclaim Unit Handles: 8 00:09:49.556 Max Placement Identifiers: 128 00:09:49.556 Number of Namespaces Supported: 256 00:09:49.556 Reclaim Unit Nominal Size: 6000000 bytes 00:09:49.556 Estimated Reclaim Unit Time Limit: Not Reported 00:09:49.556 RUH Desc #000: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #001: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #002: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #003: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #004: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #005: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #006: RUH Type: Initially Isolated 00:09:49.556 RUH Desc #007: RUH Type: Initially Isolated 00:09:49.556 00:09:49.556 FDP reclaim unit handle usage log page 00:09:49.556 ====================================== 00:09:49.556 Number of Reclaim Unit Handles: 8 00:09:49.556 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:49.556 RUH Usage Desc #001: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #002: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #003: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #004: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #005: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #006: RUH Attributes: Unused 00:09:49.556 RUH Usage Desc #007: RUH Attributes: Unused 00:09:49.556 00:09:49.556 FDP statistics log page 00:09:49.556 ======================= 00:09:49.556 Host bytes with metadata written: 554606592 00:09:49.556 Media bytes with metadata written: 554684416 00:09:49.556 Media bytes erased: 0 00:09:49.556 00:09:49.556 FDP events log page 00:09:49.556 =================== 00:09:49.556 Number of FDP events: 0 00:09:49.556 00:09:49.556 NVM Specific Namespace Data 00:09:49.556 =========================== 00:09:49.556 Logical Block Storage Tag Mask: 0 00:09:49.556 Protection Information Capabilities: 00:09:49.556 16b Guard Protection Information Storage Tag Support: No 00:09:49.556 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:49.556 Storage Tag Check Read Support: No 00:09:49.556 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.556 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:49.816 ************************************ 00:09:49.816 END TEST nvme_identify 00:09:49.816 ************************************ 00:09:49.816 00:09:49.816 real 0m1.769s 00:09:49.816 user 0m0.635s 00:09:49.816 sys 0m0.908s 00:09:49.816 03:20:13 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.816 03:20:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:49.816 03:20:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:49.816 03:20:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:49.816 03:20:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:49.816 03:20:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:49.816 ************************************ 00:09:49.816 START TEST nvme_perf 00:09:49.816 ************************************ 00:09:49.816 03:20:13 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:09:49.816 03:20:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:51.195 Initializing NVMe Controllers 00:09:51.195 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.195 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.195 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.195 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.195 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:51.195 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:51.195 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:51.195 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:51.195 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:51.195 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:51.195 Initialization complete. Launching workers. 
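For context on the results that follow: the spdk_nvme_perf invocation above runs a read workload (-w read) at queue depth 128 (-q 128) with 12288-byte (12 KiB) I/Os (-o 12288) for one second (-t 1), with latency tracking enabled (the -LL flag, which produces the per-controller latency summaries and histograms below). The MiB/s column in the table below can be cross-checked from the IOPS column, since throughput = IOPS x I/O size / 2^20. A one-line sanity check, using the values reported for the 0000:00:10.0 row:

# 13656.59 IOPS at 12288 bytes per I/O, converted to MiB/s:
awk 'BEGIN { printf "%.2f MiB/s\n", 13656.59 * 12288 / (1024 * 1024) }'
# prints 160.04 MiB/s, matching the MiB/s reported for that controller.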
00:09:51.195 ======================================================== 00:09:51.195 Latency(us) 00:09:51.195 Device Information : IOPS MiB/s Average min max 00:09:51.195 PCIE (0000:00:10.0) NSID 1 from core 0: 13656.59 160.04 9391.65 7834.43 55469.03 00:09:51.195 PCIE (0000:00:11.0) NSID 1 from core 0: 13656.59 160.04 9373.45 7959.67 52934.31 00:09:51.195 PCIE (0000:00:13.0) NSID 1 from core 0: 13656.59 160.04 9352.43 7957.03 51192.51 00:09:51.195 PCIE (0000:00:12.0) NSID 1 from core 0: 13656.59 160.04 9331.78 7986.28 48457.83 00:09:51.195 PCIE (0000:00:12.0) NSID 2 from core 0: 13656.59 160.04 9312.18 7959.85 46012.30 00:09:51.195 PCIE (0000:00:12.0) NSID 3 from core 0: 13720.40 160.79 9248.85 7945.26 38382.77 00:09:51.195 ======================================================== 00:09:51.195 Total : 82003.34 960.98 9334.99 7834.43 55469.03 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8053.822us 00:09:51.195 10.00000% : 8264.379us 00:09:51.195 25.00000% : 8474.937us 00:09:51.195 50.00000% : 8843.412us 00:09:51.195 75.00000% : 9264.527us 00:09:51.195 90.00000% : 10264.675us 00:09:51.195 95.00000% : 10948.986us 00:09:51.195 98.00000% : 11896.495us 00:09:51.195 99.00000% : 13054.561us 00:09:51.195 99.50000% : 48217.651us 00:09:51.195 99.90000% : 55166.047us 00:09:51.195 99.99000% : 55587.161us 00:09:51.195 99.99900% : 55587.161us 00:09:51.195 99.99990% : 55587.161us 00:09:51.195 99.99999% : 55587.161us 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8106.461us 00:09:51.195 10.00000% : 8317.018us 00:09:51.195 25.00000% : 8527.576us 00:09:51.195 50.00000% : 8790.773us 00:09:51.195 75.00000% : 9264.527us 00:09:51.195 90.00000% : 10212.035us 00:09:51.195 95.00000% : 10948.986us 00:09:51.195 98.00000% : 11949.134us 00:09:51.195 99.00000% : 12949.282us 00:09:51.195 99.50000% : 45690.962us 00:09:51.195 99.90000% : 52639.357us 00:09:51.195 99.99000% : 53060.472us 00:09:51.195 99.99900% : 53060.472us 00:09:51.195 99.99990% : 53060.472us 00:09:51.195 99.99999% : 53060.472us 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8106.461us 00:09:51.195 10.00000% : 8317.018us 00:09:51.195 25.00000% : 8527.576us 00:09:51.195 50.00000% : 8790.773us 00:09:51.195 75.00000% : 9264.527us 00:09:51.195 90.00000% : 10212.035us 00:09:51.195 95.00000% : 10948.986us 00:09:51.195 98.00000% : 11896.495us 00:09:51.195 99.00000% : 12844.003us 00:09:51.195 99.50000% : 44006.503us 00:09:51.195 99.90000% : 50954.898us 00:09:51.195 99.99000% : 51165.455us 00:09:51.195 99.99900% : 51376.013us 00:09:51.195 99.99990% : 51376.013us 00:09:51.195 99.99999% : 51376.013us 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8106.461us 00:09:51.195 10.00000% : 8317.018us 00:09:51.195 25.00000% : 8527.576us 00:09:51.195 50.00000% : 8790.773us 00:09:51.195 75.00000% : 9211.888us 00:09:51.195 90.00000% : 10212.035us 00:09:51.195 95.00000% : 10948.986us 00:09:51.195 98.00000% : 12001.773us 00:09:51.195 
99.00000% : 12896.643us 00:09:51.195 99.50000% : 41479.814us 00:09:51.195 99.90000% : 48217.651us 00:09:51.195 99.99000% : 48428.209us 00:09:51.195 99.99900% : 48638.766us 00:09:51.195 99.99990% : 48638.766us 00:09:51.195 99.99999% : 48638.766us 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8106.461us 00:09:51.195 10.00000% : 8317.018us 00:09:51.195 25.00000% : 8527.576us 00:09:51.195 50.00000% : 8790.773us 00:09:51.195 75.00000% : 9264.527us 00:09:51.195 90.00000% : 10212.035us 00:09:51.195 95.00000% : 10948.986us 00:09:51.195 98.00000% : 12001.773us 00:09:51.195 99.00000% : 13212.479us 00:09:51.195 99.50000% : 39163.682us 00:09:51.195 99.90000% : 45690.962us 00:09:51.195 99.99000% : 46112.077us 00:09:51.195 99.99900% : 46112.077us 00:09:51.195 99.99990% : 46112.077us 00:09:51.195 99.99999% : 46112.077us 00:09:51.195 00:09:51.195 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:51.195 ================================================================================= 00:09:51.195 1.00000% : 8106.461us 00:09:51.195 10.00000% : 8317.018us 00:09:51.195 25.00000% : 8527.576us 00:09:51.195 50.00000% : 8843.412us 00:09:51.195 75.00000% : 9264.527us 00:09:51.195 90.00000% : 10264.675us 00:09:51.195 95.00000% : 11001.626us 00:09:51.195 98.00000% : 12212.331us 00:09:51.195 99.00000% : 13423.036us 00:09:51.195 99.50000% : 31373.057us 00:09:51.195 99.90000% : 38110.895us 00:09:51.195 99.99000% : 38532.010us 00:09:51.195 99.99900% : 38532.010us 00:09:51.195 99.99990% : 38532.010us 00:09:51.195 99.99999% : 38532.010us 00:09:51.195 00:09:51.195 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:51.195 ============================================================================== 00:09:51.195 Range in us Cumulative IO count 00:09:51.195 7790.625 - 7843.264: 0.0073% ( 1) 00:09:51.195 7843.264 - 7895.904: 0.0365% ( 4) 00:09:51.195 7895.904 - 7948.543: 0.2848% ( 34) 00:09:51.195 7948.543 - 8001.182: 0.8835% ( 82) 00:09:51.195 8001.182 - 8053.822: 2.2780% ( 191) 00:09:51.195 8053.822 - 8106.461: 4.1910% ( 262) 00:09:51.195 8106.461 - 8159.100: 6.7465% ( 350) 00:09:51.195 8159.100 - 8211.740: 9.6379% ( 396) 00:09:51.195 8211.740 - 8264.379: 12.8797% ( 444) 00:09:51.195 8264.379 - 8317.018: 15.9682% ( 423) 00:09:51.195 8317.018 - 8369.658: 19.3341% ( 461) 00:09:51.195 8369.658 - 8422.297: 22.8826% ( 486) 00:09:51.195 8422.297 - 8474.937: 26.2631% ( 463) 00:09:51.195 8474.937 - 8527.576: 30.0307% ( 516) 00:09:51.195 8527.576 - 8580.215: 33.5280% ( 479) 00:09:51.195 8580.215 - 8632.855: 37.3102% ( 518) 00:09:51.195 8632.855 - 8685.494: 40.9755% ( 502) 00:09:51.195 8685.494 - 8738.133: 44.7284% ( 514) 00:09:51.195 8738.133 - 8790.773: 48.4083% ( 504) 00:09:51.195 8790.773 - 8843.412: 52.3876% ( 545) 00:09:51.195 8843.412 - 8896.051: 56.1843% ( 520) 00:09:51.195 8896.051 - 8948.691: 60.0540% ( 530) 00:09:51.195 8948.691 - 9001.330: 63.7485% ( 506) 00:09:51.195 9001.330 - 9053.969: 67.2167% ( 475) 00:09:51.195 9053.969 - 9106.609: 70.2103% ( 410) 00:09:51.195 9106.609 - 9159.248: 72.6270% ( 331) 00:09:51.195 9159.248 - 9211.888: 74.5400% ( 262) 00:09:51.195 9211.888 - 9264.527: 75.8178% ( 175) 00:09:51.195 9264.527 - 9317.166: 76.9422% ( 154) 00:09:51.195 9317.166 - 9369.806: 77.9571% ( 139) 00:09:51.195 9369.806 - 9422.445: 78.8916% ( 128) 00:09:51.195 9422.445 - 9475.084: 79.8554% ( 132) 00:09:51.195 9475.084 - 
9527.724: 80.6440% ( 108) 00:09:51.195 9527.724 - 9580.363: 81.5202% ( 120) 00:09:51.195 9580.363 - 9633.002: 82.2430% ( 99) 00:09:51.195 9633.002 - 9685.642: 83.0754% ( 114) 00:09:51.195 9685.642 - 9738.281: 83.7836% ( 97) 00:09:51.195 9738.281 - 9790.920: 84.5210% ( 101) 00:09:51.196 9790.920 - 9843.560: 85.2731% ( 103) 00:09:51.196 9843.560 - 9896.199: 85.9156% ( 88) 00:09:51.196 9896.199 - 9948.839: 86.6238% ( 97) 00:09:51.196 9948.839 - 10001.478: 87.3394% ( 98) 00:09:51.196 10001.478 - 10054.117: 88.0403% ( 96) 00:09:51.196 10054.117 - 10106.757: 88.7266% ( 94) 00:09:51.196 10106.757 - 10159.396: 89.3546% ( 86) 00:09:51.196 10159.396 - 10212.035: 89.9460% ( 81) 00:09:51.196 10212.035 - 10264.675: 90.4571% ( 70) 00:09:51.196 10264.675 - 10317.314: 90.9755% ( 71) 00:09:51.196 10317.314 - 10369.953: 91.4209% ( 61) 00:09:51.196 10369.953 - 10422.593: 91.8881% ( 64) 00:09:51.196 10422.593 - 10475.232: 92.3262% ( 60) 00:09:51.196 10475.232 - 10527.871: 92.6986% ( 51) 00:09:51.196 10527.871 - 10580.511: 93.0637% ( 50) 00:09:51.196 10580.511 - 10633.150: 93.4287% ( 50) 00:09:51.196 10633.150 - 10685.790: 93.7719% ( 47) 00:09:51.196 10685.790 - 10738.429: 94.1151% ( 47) 00:09:51.196 10738.429 - 10791.068: 94.3414% ( 31) 00:09:51.196 10791.068 - 10843.708: 94.6481% ( 42) 00:09:51.196 10843.708 - 10896.347: 94.8890% ( 33) 00:09:51.196 10896.347 - 10948.986: 95.1300% ( 33) 00:09:51.196 10948.986 - 11001.626: 95.4147% ( 39) 00:09:51.196 11001.626 - 11054.265: 95.6265% ( 29) 00:09:51.196 11054.265 - 11106.904: 95.8674% ( 33) 00:09:51.196 11106.904 - 11159.544: 96.1084% ( 33) 00:09:51.196 11159.544 - 11212.183: 96.2836% ( 24) 00:09:51.196 11212.183 - 11264.822: 96.4369% ( 21) 00:09:51.196 11264.822 - 11317.462: 96.5829% ( 20) 00:09:51.196 11317.462 - 11370.101: 96.7436% ( 22) 00:09:51.196 11370.101 - 11422.741: 96.8896% ( 20) 00:09:51.196 11422.741 - 11475.380: 97.0648% ( 24) 00:09:51.196 11475.380 - 11528.019: 97.2182% ( 21) 00:09:51.196 11528.019 - 11580.659: 97.3423% ( 17) 00:09:51.196 11580.659 - 11633.298: 97.5102% ( 23) 00:09:51.196 11633.298 - 11685.937: 97.6270% ( 16) 00:09:51.196 11685.937 - 11738.577: 97.7439% ( 16) 00:09:51.196 11738.577 - 11791.216: 97.8607% ( 16) 00:09:51.196 11791.216 - 11843.855: 97.9556% ( 13) 00:09:51.196 11843.855 - 11896.495: 98.0286% ( 10) 00:09:51.196 11896.495 - 11949.134: 98.1235% ( 13) 00:09:51.196 11949.134 - 12001.773: 98.1746% ( 7) 00:09:51.196 12001.773 - 12054.413: 98.2550% ( 11) 00:09:51.196 12054.413 - 12107.052: 98.3061% ( 7) 00:09:51.196 12107.052 - 12159.692: 98.3718% ( 9) 00:09:51.196 12159.692 - 12212.331: 98.4156% ( 6) 00:09:51.196 12212.331 - 12264.970: 98.4594% ( 6) 00:09:51.196 12264.970 - 12317.610: 98.4959% ( 5) 00:09:51.196 12317.610 - 12370.249: 98.5397% ( 6) 00:09:51.196 12370.249 - 12422.888: 98.5835% ( 6) 00:09:51.196 12422.888 - 12475.528: 98.6200% ( 5) 00:09:51.196 12475.528 - 12528.167: 98.6711% ( 7) 00:09:51.196 12528.167 - 12580.806: 98.7004% ( 4) 00:09:51.196 12580.806 - 12633.446: 98.7442% ( 6) 00:09:51.196 12633.446 - 12686.085: 98.7880% ( 6) 00:09:51.196 12686.085 - 12738.724: 98.8318% ( 6) 00:09:51.196 12738.724 - 12791.364: 98.8756% ( 6) 00:09:51.196 12791.364 - 12844.003: 98.9048% ( 4) 00:09:51.196 12844.003 - 12896.643: 98.9486% ( 6) 00:09:51.196 12896.643 - 12949.282: 98.9924% ( 6) 00:09:51.196 12949.282 - 13001.921: 98.9997% ( 1) 00:09:51.196 13001.921 - 13054.561: 99.0216% ( 3) 00:09:51.196 13054.561 - 13107.200: 99.0362% ( 2) 00:09:51.196 13107.200 - 13159.839: 99.0654% ( 4) 00:09:51.196 45901.520 - 46112.077: 99.0946% 
( 4) 00:09:51.196 46112.077 - 46322.635: 99.1384% ( 6) 00:09:51.196 46322.635 - 46533.192: 99.1822% ( 6) 00:09:51.196 46533.192 - 46743.749: 99.2261% ( 6) 00:09:51.196 46743.749 - 46954.307: 99.2626% ( 5) 00:09:51.196 46954.307 - 47164.864: 99.3137% ( 7) 00:09:51.196 47164.864 - 47375.422: 99.3648% ( 7) 00:09:51.196 47375.422 - 47585.979: 99.4013% ( 5) 00:09:51.196 47585.979 - 47796.537: 99.4451% ( 6) 00:09:51.196 47796.537 - 48007.094: 99.4889% ( 6) 00:09:51.196 48007.094 - 48217.651: 99.5327% ( 6) 00:09:51.196 53060.472 - 53271.030: 99.5473% ( 2) 00:09:51.196 53271.030 - 53481.587: 99.5838% ( 5) 00:09:51.196 53481.587 - 53692.145: 99.6349% ( 7) 00:09:51.196 53692.145 - 53902.702: 99.6714% ( 5) 00:09:51.196 53902.702 - 54323.817: 99.7664% ( 13) 00:09:51.196 54323.817 - 54744.932: 99.8540% ( 12) 00:09:51.196 54744.932 - 55166.047: 99.9416% ( 12) 00:09:51.196 55166.047 - 55587.161: 100.0000% ( 8) 00:09:51.196 00:09:51.196 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:51.196 ============================================================================== 00:09:51.196 Range in us Cumulative IO count 00:09:51.196 7948.543 - 8001.182: 0.0803% ( 11) 00:09:51.196 8001.182 - 8053.822: 0.5184% ( 60) 00:09:51.196 8053.822 - 8106.461: 1.5917% ( 147) 00:09:51.196 8106.461 - 8159.100: 3.2929% ( 233) 00:09:51.196 8159.100 - 8211.740: 5.4103% ( 290) 00:09:51.196 8211.740 - 8264.379: 8.9223% ( 481) 00:09:51.196 8264.379 - 8317.018: 12.7555% ( 525) 00:09:51.196 8317.018 - 8369.658: 16.5012% ( 513) 00:09:51.196 8369.658 - 8422.297: 20.4001% ( 534) 00:09:51.196 8422.297 - 8474.937: 24.5254% ( 565) 00:09:51.196 8474.937 - 8527.576: 28.6069% ( 559) 00:09:51.196 8527.576 - 8580.215: 32.8417% ( 580) 00:09:51.196 8580.215 - 8632.855: 37.1568% ( 591) 00:09:51.196 8632.855 - 8685.494: 41.5888% ( 607) 00:09:51.196 8685.494 - 8738.133: 45.9769% ( 601) 00:09:51.196 8738.133 - 8790.773: 50.3870% ( 604) 00:09:51.196 8790.773 - 8843.412: 54.7532% ( 598) 00:09:51.196 8843.412 - 8896.051: 59.1487% ( 602) 00:09:51.196 8896.051 - 8948.691: 63.2447% ( 561) 00:09:51.196 8948.691 - 9001.330: 66.8151% ( 489) 00:09:51.196 9001.330 - 9053.969: 69.7941% ( 408) 00:09:51.196 9053.969 - 9106.609: 72.1525% ( 323) 00:09:51.196 9106.609 - 9159.248: 73.7588% ( 220) 00:09:51.196 9159.248 - 9211.888: 74.9489% ( 163) 00:09:51.196 9211.888 - 9264.527: 75.9930% ( 143) 00:09:51.196 9264.527 - 9317.166: 77.1174% ( 154) 00:09:51.196 9317.166 - 9369.806: 78.0812% ( 132) 00:09:51.196 9369.806 - 9422.445: 79.0012% ( 126) 00:09:51.196 9422.445 - 9475.084: 79.9138% ( 125) 00:09:51.196 9475.084 - 9527.724: 80.6732% ( 104) 00:09:51.196 9527.724 - 9580.363: 81.4179% ( 102) 00:09:51.196 9580.363 - 9633.002: 82.0824% ( 91) 00:09:51.196 9633.002 - 9685.642: 82.7541% ( 92) 00:09:51.196 9685.642 - 9738.281: 83.4696% ( 98) 00:09:51.196 9738.281 - 9790.920: 84.2290% ( 104) 00:09:51.196 9790.920 - 9843.560: 84.9591% ( 100) 00:09:51.196 9843.560 - 9896.199: 85.7842% ( 113) 00:09:51.196 9896.199 - 9948.839: 86.6019% ( 112) 00:09:51.196 9948.839 - 10001.478: 87.4270% ( 113) 00:09:51.196 10001.478 - 10054.117: 88.2155% ( 108) 00:09:51.196 10054.117 - 10106.757: 88.9311% ( 98) 00:09:51.196 10106.757 - 10159.396: 89.5736% ( 88) 00:09:51.196 10159.396 - 10212.035: 90.2161% ( 88) 00:09:51.196 10212.035 - 10264.675: 90.7272% ( 70) 00:09:51.196 10264.675 - 10317.314: 91.2164% ( 67) 00:09:51.196 10317.314 - 10369.953: 91.6545% ( 60) 00:09:51.196 10369.953 - 10422.593: 92.0342% ( 52) 00:09:51.196 10422.593 - 10475.232: 92.4211% ( 53) 00:09:51.196 
10475.232 - 10527.871: 92.8081% ( 53) 00:09:51.196 10527.871 - 10580.511: 93.2243% ( 57) 00:09:51.196 10580.511 - 10633.150: 93.5894% ( 50) 00:09:51.196 10633.150 - 10685.790: 93.9982% ( 56) 00:09:51.196 10685.790 - 10738.429: 94.3195% ( 44) 00:09:51.196 10738.429 - 10791.068: 94.5312% ( 29) 00:09:51.196 10791.068 - 10843.708: 94.7357% ( 28) 00:09:51.196 10843.708 - 10896.347: 94.9401% ( 28) 00:09:51.196 10896.347 - 10948.986: 95.1592% ( 30) 00:09:51.196 10948.986 - 11001.626: 95.3636% ( 28) 00:09:51.196 11001.626 - 11054.265: 95.5680% ( 28) 00:09:51.196 11054.265 - 11106.904: 95.7725% ( 28) 00:09:51.196 11106.904 - 11159.544: 95.9769% ( 28) 00:09:51.196 11159.544 - 11212.183: 96.1741% ( 27) 00:09:51.196 11212.183 - 11264.822: 96.3420% ( 23) 00:09:51.196 11264.822 - 11317.462: 96.5318% ( 26) 00:09:51.196 11317.462 - 11370.101: 96.7144% ( 25) 00:09:51.196 11370.101 - 11422.741: 96.8896% ( 24) 00:09:51.196 11422.741 - 11475.380: 97.0721% ( 25) 00:09:51.196 11475.380 - 11528.019: 97.2328% ( 22) 00:09:51.196 11528.019 - 11580.659: 97.3569% ( 17) 00:09:51.196 11580.659 - 11633.298: 97.4810% ( 17) 00:09:51.196 11633.298 - 11685.937: 97.5686% ( 12) 00:09:51.196 11685.937 - 11738.577: 97.6636% ( 13) 00:09:51.196 11738.577 - 11791.216: 97.7439% ( 11) 00:09:51.196 11791.216 - 11843.855: 97.8753% ( 18) 00:09:51.196 11843.855 - 11896.495: 97.9629% ( 12) 00:09:51.196 11896.495 - 11949.134: 98.0140% ( 7) 00:09:51.196 11949.134 - 12001.773: 98.0578% ( 6) 00:09:51.196 12001.773 - 12054.413: 98.1089% ( 7) 00:09:51.196 12054.413 - 12107.052: 98.1673% ( 8) 00:09:51.196 12107.052 - 12159.692: 98.2404% ( 10) 00:09:51.196 12159.692 - 12212.331: 98.3134% ( 10) 00:09:51.196 12212.331 - 12264.970: 98.3791% ( 9) 00:09:51.196 12264.970 - 12317.610: 98.4302% ( 7) 00:09:51.196 12317.610 - 12370.249: 98.4886% ( 8) 00:09:51.196 12370.249 - 12422.888: 98.5543% ( 9) 00:09:51.196 12422.888 - 12475.528: 98.6273% ( 10) 00:09:51.196 12475.528 - 12528.167: 98.6930% ( 9) 00:09:51.196 12528.167 - 12580.806: 98.7661% ( 10) 00:09:51.196 12580.806 - 12633.446: 98.8172% ( 7) 00:09:51.196 12633.446 - 12686.085: 98.8683% ( 7) 00:09:51.196 12686.085 - 12738.724: 98.9194% ( 7) 00:09:51.196 12738.724 - 12791.364: 98.9413% ( 3) 00:09:51.196 12791.364 - 12844.003: 98.9705% ( 4) 00:09:51.196 12844.003 - 12896.643: 98.9924% ( 3) 00:09:51.196 12896.643 - 12949.282: 99.0216% ( 4) 00:09:51.196 12949.282 - 13001.921: 99.0435% ( 3) 00:09:51.196 13001.921 - 13054.561: 99.0654% ( 3) 00:09:51.196 43585.388 - 43795.945: 99.0727% ( 1) 00:09:51.196 43795.945 - 44006.503: 99.1165% ( 6) 00:09:51.197 44006.503 - 44217.060: 99.1676% ( 7) 00:09:51.197 44217.060 - 44427.618: 99.2114% ( 6) 00:09:51.197 44427.618 - 44638.175: 99.2626% ( 7) 00:09:51.197 44638.175 - 44848.733: 99.3137% ( 7) 00:09:51.197 44848.733 - 45059.290: 99.3575% ( 6) 00:09:51.197 45059.290 - 45269.847: 99.4086% ( 7) 00:09:51.197 45269.847 - 45480.405: 99.4524% ( 6) 00:09:51.197 45480.405 - 45690.962: 99.5035% ( 7) 00:09:51.197 45690.962 - 45901.520: 99.5327% ( 4) 00:09:51.197 50744.341 - 50954.898: 99.5546% ( 3) 00:09:51.197 50954.898 - 51165.455: 99.5984% ( 6) 00:09:51.197 51165.455 - 51376.013: 99.6422% ( 6) 00:09:51.197 51376.013 - 51586.570: 99.6860% ( 6) 00:09:51.197 51586.570 - 51797.128: 99.7298% ( 6) 00:09:51.197 51797.128 - 52007.685: 99.7810% ( 7) 00:09:51.197 52007.685 - 52218.243: 99.8248% ( 6) 00:09:51.197 52218.243 - 52428.800: 99.8759% ( 7) 00:09:51.197 52428.800 - 52639.357: 99.9270% ( 7) 00:09:51.197 52639.357 - 52849.915: 99.9708% ( 6) 00:09:51.197 52849.915 - 53060.472: 
100.0000% ( 4) 00:09:51.197 00:09:51.197 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:51.197 ============================================================================== 00:09:51.197 Range in us Cumulative IO count 00:09:51.197 7948.543 - 8001.182: 0.0876% ( 12) 00:09:51.197 8001.182 - 8053.822: 0.5695% ( 66) 00:09:51.197 8053.822 - 8106.461: 1.5844% ( 139) 00:09:51.197 8106.461 - 8159.100: 3.2272% ( 225) 00:09:51.197 8159.100 - 8211.740: 5.8411% ( 358) 00:09:51.197 8211.740 - 8264.379: 9.1998% ( 460) 00:09:51.197 8264.379 - 8317.018: 12.7263% ( 483) 00:09:51.197 8317.018 - 8369.658: 16.5012% ( 517) 00:09:51.197 8369.658 - 8422.297: 20.4293% ( 538) 00:09:51.197 8422.297 - 8474.937: 24.4451% ( 550) 00:09:51.197 8474.937 - 8527.576: 28.7018% ( 583) 00:09:51.197 8527.576 - 8580.215: 32.8709% ( 571) 00:09:51.197 8580.215 - 8632.855: 37.2152% ( 595) 00:09:51.197 8632.855 - 8685.494: 41.6326% ( 605) 00:09:51.197 8685.494 - 8738.133: 46.0426% ( 604) 00:09:51.197 8738.133 - 8790.773: 50.5476% ( 617) 00:09:51.197 8790.773 - 8843.412: 55.0453% ( 616) 00:09:51.197 8843.412 - 8896.051: 59.4042% ( 597) 00:09:51.197 8896.051 - 8948.691: 63.4565% ( 555) 00:09:51.197 8948.691 - 9001.330: 66.9539% ( 479) 00:09:51.197 9001.330 - 9053.969: 69.9839% ( 415) 00:09:51.197 9053.969 - 9106.609: 71.9991% ( 276) 00:09:51.197 9106.609 - 9159.248: 73.5762% ( 216) 00:09:51.197 9159.248 - 9211.888: 74.8102% ( 169) 00:09:51.197 9211.888 - 9264.527: 75.9054% ( 150) 00:09:51.197 9264.527 - 9317.166: 76.9349% ( 141) 00:09:51.197 9317.166 - 9369.806: 78.0593% ( 154) 00:09:51.197 9369.806 - 9422.445: 78.9355% ( 120) 00:09:51.197 9422.445 - 9475.084: 79.7897% ( 117) 00:09:51.197 9475.084 - 9527.724: 80.5418% ( 103) 00:09:51.197 9527.724 - 9580.363: 81.2500% ( 97) 00:09:51.197 9580.363 - 9633.002: 82.0020% ( 103) 00:09:51.197 9633.002 - 9685.642: 82.6884% ( 94) 00:09:51.197 9685.642 - 9738.281: 83.4112% ( 99) 00:09:51.197 9738.281 - 9790.920: 84.1706% ( 104) 00:09:51.197 9790.920 - 9843.560: 85.0686% ( 123) 00:09:51.197 9843.560 - 9896.199: 85.9302% ( 118) 00:09:51.197 9896.199 - 9948.839: 86.7699% ( 115) 00:09:51.197 9948.839 - 10001.478: 87.6533% ( 121) 00:09:51.197 10001.478 - 10054.117: 88.4784% ( 113) 00:09:51.197 10054.117 - 10106.757: 89.2742% ( 109) 00:09:51.197 10106.757 - 10159.396: 89.9387% ( 91) 00:09:51.197 10159.396 - 10212.035: 90.5447% ( 83) 00:09:51.197 10212.035 - 10264.675: 91.0558% ( 70) 00:09:51.197 10264.675 - 10317.314: 91.4793% ( 58) 00:09:51.197 10317.314 - 10369.953: 91.9027% ( 58) 00:09:51.197 10369.953 - 10422.593: 92.3627% ( 63) 00:09:51.197 10422.593 - 10475.232: 92.7278% ( 50) 00:09:51.197 10475.232 - 10527.871: 93.1002% ( 51) 00:09:51.197 10527.871 - 10580.511: 93.3922% ( 40) 00:09:51.197 10580.511 - 10633.150: 93.7500% ( 49) 00:09:51.197 10633.150 - 10685.790: 94.0129% ( 36) 00:09:51.197 10685.790 - 10738.429: 94.2392% ( 31) 00:09:51.197 10738.429 - 10791.068: 94.4728% ( 32) 00:09:51.197 10791.068 - 10843.708: 94.6627% ( 26) 00:09:51.197 10843.708 - 10896.347: 94.8817% ( 30) 00:09:51.197 10896.347 - 10948.986: 95.1154% ( 32) 00:09:51.197 10948.986 - 11001.626: 95.3198% ( 28) 00:09:51.197 11001.626 - 11054.265: 95.5388% ( 30) 00:09:51.197 11054.265 - 11106.904: 95.7287% ( 26) 00:09:51.197 11106.904 - 11159.544: 95.9477% ( 30) 00:09:51.197 11159.544 - 11212.183: 96.1668% ( 30) 00:09:51.197 11212.183 - 11264.822: 96.3858% ( 30) 00:09:51.197 11264.822 - 11317.462: 96.5829% ( 27) 00:09:51.197 11317.462 - 11370.101: 96.7582% ( 24) 00:09:51.197 11370.101 - 11422.741: 96.9261% ( 
23) 00:09:51.197 11422.741 - 11475.380: 97.0867% ( 22) 00:09:51.197 11475.380 - 11528.019: 97.2401% ( 21) 00:09:51.197 11528.019 - 11580.659: 97.3934% ( 21) 00:09:51.197 11580.659 - 11633.298: 97.5613% ( 23) 00:09:51.197 11633.298 - 11685.937: 97.6709% ( 15) 00:09:51.197 11685.937 - 11738.577: 97.7804% ( 15) 00:09:51.197 11738.577 - 11791.216: 97.8899% ( 15) 00:09:51.197 11791.216 - 11843.855: 97.9629% ( 10) 00:09:51.197 11843.855 - 11896.495: 98.0359% ( 10) 00:09:51.197 11896.495 - 11949.134: 98.0943% ( 8) 00:09:51.197 11949.134 - 12001.773: 98.1600% ( 9) 00:09:51.197 12001.773 - 12054.413: 98.2185% ( 8) 00:09:51.197 12054.413 - 12107.052: 98.2842% ( 9) 00:09:51.197 12107.052 - 12159.692: 98.3499% ( 9) 00:09:51.197 12159.692 - 12212.331: 98.4375% ( 12) 00:09:51.197 12212.331 - 12264.970: 98.5032% ( 9) 00:09:51.197 12264.970 - 12317.610: 98.5689% ( 9) 00:09:51.197 12317.610 - 12370.249: 98.6273% ( 8) 00:09:51.197 12370.249 - 12422.888: 98.6930% ( 9) 00:09:51.197 12422.888 - 12475.528: 98.7515% ( 8) 00:09:51.197 12475.528 - 12528.167: 98.8318% ( 11) 00:09:51.197 12528.167 - 12580.806: 98.8829% ( 7) 00:09:51.197 12580.806 - 12633.446: 98.9121% ( 4) 00:09:51.197 12633.446 - 12686.085: 98.9340% ( 3) 00:09:51.197 12686.085 - 12738.724: 98.9559% ( 3) 00:09:51.197 12738.724 - 12791.364: 98.9851% ( 4) 00:09:51.197 12791.364 - 12844.003: 99.0070% ( 3) 00:09:51.197 12844.003 - 12896.643: 99.0289% ( 3) 00:09:51.197 12896.643 - 12949.282: 99.0581% ( 4) 00:09:51.197 12949.282 - 13001.921: 99.0654% ( 1) 00:09:51.197 41690.371 - 41900.929: 99.0873% ( 3) 00:09:51.197 41900.929 - 42111.486: 99.1311% ( 6) 00:09:51.197 42111.486 - 42322.043: 99.1822% ( 7) 00:09:51.197 42322.043 - 42532.601: 99.2261% ( 6) 00:09:51.197 42532.601 - 42743.158: 99.2772% ( 7) 00:09:51.197 42743.158 - 42953.716: 99.3210% ( 6) 00:09:51.197 42953.716 - 43164.273: 99.3721% ( 7) 00:09:51.197 43164.273 - 43374.831: 99.4013% ( 4) 00:09:51.197 43374.831 - 43585.388: 99.4305% ( 4) 00:09:51.197 43585.388 - 43795.945: 99.4743% ( 6) 00:09:51.197 43795.945 - 44006.503: 99.5254% ( 7) 00:09:51.197 44006.503 - 44217.060: 99.5327% ( 1) 00:09:51.197 49059.881 - 49270.439: 99.5765% ( 6) 00:09:51.197 49270.439 - 49480.996: 99.6203% ( 6) 00:09:51.197 49480.996 - 49691.553: 99.6641% ( 6) 00:09:51.197 49691.553 - 49902.111: 99.7152% ( 7) 00:09:51.197 49902.111 - 50112.668: 99.7591% ( 6) 00:09:51.197 50112.668 - 50323.226: 99.8102% ( 7) 00:09:51.197 50323.226 - 50533.783: 99.8540% ( 6) 00:09:51.197 50533.783 - 50744.341: 99.8978% ( 6) 00:09:51.197 50744.341 - 50954.898: 99.9489% ( 7) 00:09:51.197 50954.898 - 51165.455: 99.9927% ( 6) 00:09:51.197 51165.455 - 51376.013: 100.0000% ( 1) 00:09:51.197 00:09:51.197 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:51.197 ============================================================================== 00:09:51.197 Range in us Cumulative IO count 00:09:51.197 7948.543 - 8001.182: 0.0219% ( 3) 00:09:51.197 8001.182 - 8053.822: 0.3724% ( 48) 00:09:51.197 8053.822 - 8106.461: 1.4384% ( 146) 00:09:51.197 8106.461 - 8159.100: 3.3367% ( 260) 00:09:51.197 8159.100 - 8211.740: 5.9141% ( 353) 00:09:51.197 8211.740 - 8264.379: 9.0610% ( 431) 00:09:51.197 8264.379 - 8317.018: 12.4708% ( 467) 00:09:51.197 8317.018 - 8369.658: 16.3478% ( 531) 00:09:51.197 8369.658 - 8422.297: 20.4585% ( 563) 00:09:51.197 8422.297 - 8474.937: 24.4743% ( 550) 00:09:51.197 8474.937 - 8527.576: 28.6726% ( 575) 00:09:51.197 8527.576 - 8580.215: 32.8417% ( 571) 00:09:51.197 8580.215 - 8632.855: 37.1860% ( 595) 00:09:51.197 8632.855 
- 8685.494: 41.5450% ( 597) 00:09:51.197 8685.494 - 8738.133: 45.9915% ( 609) 00:09:51.197 8738.133 - 8790.773: 50.4235% ( 607) 00:09:51.197 8790.773 - 8843.412: 54.9357% ( 618) 00:09:51.197 8843.412 - 8896.051: 59.2874% ( 596) 00:09:51.197 8896.051 - 8948.691: 63.3470% ( 556) 00:09:51.197 8948.691 - 9001.330: 66.9393% ( 492) 00:09:51.197 9001.330 - 9053.969: 69.8890% ( 404) 00:09:51.197 9053.969 - 9106.609: 72.0721% ( 299) 00:09:51.197 9106.609 - 9159.248: 73.7953% ( 236) 00:09:51.197 9159.248 - 9211.888: 75.0219% ( 168) 00:09:51.197 9211.888 - 9264.527: 76.0295% ( 138) 00:09:51.197 9264.527 - 9317.166: 77.0079% ( 134) 00:09:51.197 9317.166 - 9369.806: 77.9279% ( 126) 00:09:51.197 9369.806 - 9422.445: 78.7894% ( 118) 00:09:51.197 9422.445 - 9475.084: 79.5050% ( 98) 00:09:51.197 9475.084 - 9527.724: 80.3811% ( 120) 00:09:51.197 9527.724 - 9580.363: 81.1478% ( 105) 00:09:51.197 9580.363 - 9633.002: 81.9436% ( 109) 00:09:51.197 9633.002 - 9685.642: 82.6811% ( 101) 00:09:51.197 9685.642 - 9738.281: 83.4039% ( 99) 00:09:51.197 9738.281 - 9790.920: 84.2363% ( 114) 00:09:51.197 9790.920 - 9843.560: 85.0905% ( 117) 00:09:51.197 9843.560 - 9896.199: 85.8937% ( 110) 00:09:51.197 9896.199 - 9948.839: 86.6968% ( 110) 00:09:51.198 9948.839 - 10001.478: 87.4781% ( 107) 00:09:51.198 10001.478 - 10054.117: 88.2520% ( 106) 00:09:51.198 10054.117 - 10106.757: 88.9749% ( 99) 00:09:51.198 10106.757 - 10159.396: 89.6977% ( 99) 00:09:51.198 10159.396 - 10212.035: 90.3914% ( 95) 00:09:51.198 10212.035 - 10264.675: 91.0120% ( 85) 00:09:51.198 10264.675 - 10317.314: 91.5523% ( 74) 00:09:51.198 10317.314 - 10369.953: 92.0196% ( 64) 00:09:51.198 10369.953 - 10422.593: 92.4211% ( 55) 00:09:51.198 10422.593 - 10475.232: 92.7716% ( 48) 00:09:51.198 10475.232 - 10527.871: 93.1440% ( 51) 00:09:51.198 10527.871 - 10580.511: 93.4287% ( 39) 00:09:51.198 10580.511 - 10633.150: 93.7281% ( 41) 00:09:51.198 10633.150 - 10685.790: 93.9909% ( 36) 00:09:51.198 10685.790 - 10738.429: 94.2611% ( 37) 00:09:51.198 10738.429 - 10791.068: 94.4947% ( 32) 00:09:51.198 10791.068 - 10843.708: 94.7503% ( 35) 00:09:51.198 10843.708 - 10896.347: 94.9839% ( 32) 00:09:51.198 10896.347 - 10948.986: 95.2468% ( 36) 00:09:51.198 10948.986 - 11001.626: 95.4658% ( 30) 00:09:51.198 11001.626 - 11054.265: 95.6484% ( 25) 00:09:51.198 11054.265 - 11106.904: 95.7871% ( 19) 00:09:51.198 11106.904 - 11159.544: 95.9404% ( 21) 00:09:51.198 11159.544 - 11212.183: 96.0938% ( 21) 00:09:51.198 11212.183 - 11264.822: 96.2617% ( 23) 00:09:51.198 11264.822 - 11317.462: 96.4223% ( 22) 00:09:51.198 11317.462 - 11370.101: 96.5756% ( 21) 00:09:51.198 11370.101 - 11422.741: 96.7144% ( 19) 00:09:51.198 11422.741 - 11475.380: 96.8677% ( 21) 00:09:51.198 11475.380 - 11528.019: 96.9991% ( 18) 00:09:51.198 11528.019 - 11580.659: 97.1817% ( 25) 00:09:51.198 11580.659 - 11633.298: 97.3715% ( 26) 00:09:51.198 11633.298 - 11685.937: 97.5102% ( 19) 00:09:51.198 11685.937 - 11738.577: 97.6197% ( 15) 00:09:51.198 11738.577 - 11791.216: 97.7220% ( 14) 00:09:51.198 11791.216 - 11843.855: 97.8023% ( 11) 00:09:51.198 11843.855 - 11896.495: 97.8607% ( 8) 00:09:51.198 11896.495 - 11949.134: 97.9483% ( 12) 00:09:51.198 11949.134 - 12001.773: 98.0359% ( 12) 00:09:51.198 12001.773 - 12054.413: 98.1162% ( 11) 00:09:51.198 12054.413 - 12107.052: 98.2039% ( 12) 00:09:51.198 12107.052 - 12159.692: 98.2915% ( 12) 00:09:51.198 12159.692 - 12212.331: 98.3499% ( 8) 00:09:51.198 12212.331 - 12264.970: 98.4156% ( 9) 00:09:51.198 12264.970 - 12317.610: 98.4886% ( 10) 00:09:51.198 12317.610 - 12370.249: 
98.5543% ( 9) 00:09:51.198 12370.249 - 12422.888: 98.6200% ( 9) 00:09:51.198 12422.888 - 12475.528: 98.6857% ( 9) 00:09:51.198 12475.528 - 12528.167: 98.7296% ( 6) 00:09:51.198 12528.167 - 12580.806: 98.7734% ( 6) 00:09:51.198 12580.806 - 12633.446: 98.8099% ( 5) 00:09:51.198 12633.446 - 12686.085: 98.8610% ( 7) 00:09:51.198 12686.085 - 12738.724: 98.8975% ( 5) 00:09:51.198 12738.724 - 12791.364: 98.9413% ( 6) 00:09:51.198 12791.364 - 12844.003: 98.9778% ( 5) 00:09:51.198 12844.003 - 12896.643: 99.0143% ( 5) 00:09:51.198 12896.643 - 12949.282: 99.0435% ( 4) 00:09:51.198 12949.282 - 13001.921: 99.0581% ( 2) 00:09:51.198 13001.921 - 13054.561: 99.0654% ( 1) 00:09:51.198 39374.239 - 39584.797: 99.0873% ( 3) 00:09:51.198 39584.797 - 39795.354: 99.1384% ( 7) 00:09:51.198 39795.354 - 40005.912: 99.1822% ( 6) 00:09:51.198 40005.912 - 40216.469: 99.2334% ( 7) 00:09:51.198 40216.469 - 40427.027: 99.2772% ( 6) 00:09:51.198 40427.027 - 40637.584: 99.3283% ( 7) 00:09:51.198 40637.584 - 40848.141: 99.3721% ( 6) 00:09:51.198 40848.141 - 41058.699: 99.4232% ( 7) 00:09:51.198 41058.699 - 41269.256: 99.4597% ( 5) 00:09:51.198 41269.256 - 41479.814: 99.5108% ( 7) 00:09:51.198 41479.814 - 41690.371: 99.5327% ( 3) 00:09:51.198 46322.635 - 46533.192: 99.5546% ( 3) 00:09:51.198 46533.192 - 46743.749: 99.6057% ( 7) 00:09:51.198 46743.749 - 46954.307: 99.6568% ( 7) 00:09:51.198 46954.307 - 47164.864: 99.7079% ( 7) 00:09:51.198 47164.864 - 47375.422: 99.7518% ( 6) 00:09:51.198 47375.422 - 47585.979: 99.8029% ( 7) 00:09:51.198 47585.979 - 47796.537: 99.8467% ( 6) 00:09:51.198 47796.537 - 48007.094: 99.8905% ( 6) 00:09:51.198 48007.094 - 48217.651: 99.9416% ( 7) 00:09:51.198 48217.651 - 48428.209: 99.9927% ( 7) 00:09:51.198 48428.209 - 48638.766: 100.0000% ( 1) 00:09:51.198 00:09:51.198 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:51.198 ============================================================================== 00:09:51.198 Range in us Cumulative IO count 00:09:51.198 7948.543 - 8001.182: 0.0876% ( 12) 00:09:51.198 8001.182 - 8053.822: 0.5695% ( 66) 00:09:51.198 8053.822 - 8106.461: 1.7304% ( 159) 00:09:51.198 8106.461 - 8159.100: 3.3659% ( 224) 00:09:51.198 8159.100 - 8211.740: 5.9725% ( 357) 00:09:51.198 8211.740 - 8264.379: 9.3093% ( 457) 00:09:51.198 8264.379 - 8317.018: 12.8797% ( 489) 00:09:51.198 8317.018 - 8369.658: 16.8370% ( 542) 00:09:51.198 8369.658 - 8422.297: 20.6557% ( 523) 00:09:51.198 8422.297 - 8474.937: 24.6495% ( 547) 00:09:51.198 8474.937 - 8527.576: 28.7894% ( 567) 00:09:51.198 8527.576 - 8580.215: 32.9220% ( 566) 00:09:51.198 8580.215 - 8632.855: 37.2737% ( 596) 00:09:51.198 8632.855 - 8685.494: 41.5085% ( 580) 00:09:51.198 8685.494 - 8738.133: 45.8163% ( 590) 00:09:51.198 8738.133 - 8790.773: 50.1022% ( 587) 00:09:51.198 8790.773 - 8843.412: 54.4758% ( 599) 00:09:51.198 8843.412 - 8896.051: 58.9004% ( 606) 00:09:51.198 8896.051 - 8948.691: 62.9381% ( 553) 00:09:51.198 8948.691 - 9001.330: 66.5815% ( 499) 00:09:51.198 9001.330 - 9053.969: 69.6116% ( 415) 00:09:51.198 9053.969 - 9106.609: 71.9845% ( 325) 00:09:51.198 9106.609 - 9159.248: 73.6200% ( 224) 00:09:51.198 9159.248 - 9211.888: 74.8102% ( 163) 00:09:51.198 9211.888 - 9264.527: 75.7739% ( 132) 00:09:51.198 9264.527 - 9317.166: 76.8327% ( 145) 00:09:51.198 9317.166 - 9369.806: 77.7672% ( 128) 00:09:51.198 9369.806 - 9422.445: 78.5631% ( 109) 00:09:51.198 9422.445 - 9475.084: 79.3662% ( 110) 00:09:51.198 9475.084 - 9527.724: 80.1475% ( 107) 00:09:51.198 9527.724 - 9580.363: 80.9579% ( 111) 00:09:51.198 9580.363 
- 9633.002: 81.8268% ( 119) 00:09:51.198 9633.002 - 9685.642: 82.6884% ( 118) 00:09:51.198 9685.642 - 9738.281: 83.5426% ( 117) 00:09:51.198 9738.281 - 9790.920: 84.3823% ( 115) 00:09:51.198 9790.920 - 9843.560: 85.2293% ( 116) 00:09:51.198 9843.560 - 9896.199: 86.0105% ( 107) 00:09:51.198 9896.199 - 9948.839: 86.7553% ( 102) 00:09:51.198 9948.839 - 10001.478: 87.4635% ( 97) 00:09:51.198 10001.478 - 10054.117: 88.1863% ( 99) 00:09:51.198 10054.117 - 10106.757: 88.8873% ( 96) 00:09:51.198 10106.757 - 10159.396: 89.5663% ( 93) 00:09:51.198 10159.396 - 10212.035: 90.2380% ( 92) 00:09:51.198 10212.035 - 10264.675: 90.8586% ( 85) 00:09:51.198 10264.675 - 10317.314: 91.3770% ( 71) 00:09:51.198 10317.314 - 10369.953: 91.8370% ( 63) 00:09:51.198 10369.953 - 10422.593: 92.2605% ( 58) 00:09:51.198 10422.593 - 10475.232: 92.6548% ( 54) 00:09:51.198 10475.232 - 10527.871: 92.9249% ( 37) 00:09:51.198 10527.871 - 10580.511: 93.1951% ( 37) 00:09:51.198 10580.511 - 10633.150: 93.4433% ( 34) 00:09:51.198 10633.150 - 10685.790: 93.7281% ( 39) 00:09:51.198 10685.790 - 10738.429: 93.9471% ( 30) 00:09:51.198 10738.429 - 10791.068: 94.2246% ( 38) 00:09:51.198 10791.068 - 10843.708: 94.4947% ( 37) 00:09:51.198 10843.708 - 10896.347: 94.7795% ( 39) 00:09:51.198 10896.347 - 10948.986: 95.0277% ( 34) 00:09:51.198 10948.986 - 11001.626: 95.2614% ( 32) 00:09:51.198 11001.626 - 11054.265: 95.5096% ( 34) 00:09:51.198 11054.265 - 11106.904: 95.7068% ( 27) 00:09:51.198 11106.904 - 11159.544: 95.9039% ( 27) 00:09:51.198 11159.544 - 11212.183: 96.1011% ( 27) 00:09:51.198 11212.183 - 11264.822: 96.2763% ( 24) 00:09:51.198 11264.822 - 11317.462: 96.4661% ( 26) 00:09:51.198 11317.462 - 11370.101: 96.6341% ( 23) 00:09:51.198 11370.101 - 11422.741: 96.8093% ( 24) 00:09:51.198 11422.741 - 11475.380: 96.9918% ( 25) 00:09:51.198 11475.380 - 11528.019: 97.1744% ( 25) 00:09:51.198 11528.019 - 11580.659: 97.3569% ( 25) 00:09:51.198 11580.659 - 11633.298: 97.5102% ( 21) 00:09:51.198 11633.298 - 11685.937: 97.6343% ( 17) 00:09:51.198 11685.937 - 11738.577: 97.7220% ( 12) 00:09:51.198 11738.577 - 11791.216: 97.8023% ( 11) 00:09:51.198 11791.216 - 11843.855: 97.8534% ( 7) 00:09:51.198 11843.855 - 11896.495: 97.9045% ( 7) 00:09:51.198 11896.495 - 11949.134: 97.9556% ( 7) 00:09:51.198 11949.134 - 12001.773: 98.0213% ( 9) 00:09:51.198 12001.773 - 12054.413: 98.0870% ( 9) 00:09:51.198 12054.413 - 12107.052: 98.1527% ( 9) 00:09:51.198 12107.052 - 12159.692: 98.2258% ( 10) 00:09:51.198 12159.692 - 12212.331: 98.2769% ( 7) 00:09:51.198 12212.331 - 12264.970: 98.3426% ( 9) 00:09:51.198 12264.970 - 12317.610: 98.4083% ( 9) 00:09:51.198 12317.610 - 12370.249: 98.4740% ( 9) 00:09:51.198 12370.249 - 12422.888: 98.5397% ( 9) 00:09:51.198 12422.888 - 12475.528: 98.5981% ( 8) 00:09:51.198 12475.528 - 12528.167: 98.6273% ( 4) 00:09:51.198 12528.167 - 12580.806: 98.6711% ( 6) 00:09:51.198 12580.806 - 12633.446: 98.7077% ( 5) 00:09:51.198 12633.446 - 12686.085: 98.7588% ( 7) 00:09:51.198 12686.085 - 12738.724: 98.8026% ( 6) 00:09:51.198 12738.724 - 12791.364: 98.8391% ( 5) 00:09:51.198 12791.364 - 12844.003: 98.8829% ( 6) 00:09:51.198 12844.003 - 12896.643: 98.9194% ( 5) 00:09:51.198 12896.643 - 12949.282: 98.9413% ( 3) 00:09:51.198 12949.282 - 13001.921: 98.9559% ( 2) 00:09:51.198 13001.921 - 13054.561: 98.9705% ( 2) 00:09:51.198 13054.561 - 13107.200: 98.9851% ( 2) 00:09:51.198 13107.200 - 13159.839: 98.9997% ( 2) 00:09:51.198 13159.839 - 13212.479: 99.0216% ( 3) 00:09:51.198 13212.479 - 13265.118: 99.0362% ( 2) 00:09:51.198 13265.118 - 13317.757: 
99.0508% ( 2) 00:09:51.198 13317.757 - 13370.397: 99.0654% ( 2) 00:09:51.199 37058.108 - 37268.665: 99.1092% ( 6) 00:09:51.199 37268.665 - 37479.222: 99.1530% ( 6) 00:09:51.199 37479.222 - 37689.780: 99.2041% ( 7) 00:09:51.199 37689.780 - 37900.337: 99.2480% ( 6) 00:09:51.199 37900.337 - 38110.895: 99.2991% ( 7) 00:09:51.199 38110.895 - 38321.452: 99.3429% ( 6) 00:09:51.199 38321.452 - 38532.010: 99.3940% ( 7) 00:09:51.199 38532.010 - 38742.567: 99.4451% ( 7) 00:09:51.199 38742.567 - 38953.124: 99.4889% ( 6) 00:09:51.199 38953.124 - 39163.682: 99.5327% ( 6) 00:09:51.199 43795.945 - 44006.503: 99.5546% ( 3) 00:09:51.199 44006.503 - 44217.060: 99.5984% ( 6) 00:09:51.199 44217.060 - 44427.618: 99.6422% ( 6) 00:09:51.199 44427.618 - 44638.175: 99.6860% ( 6) 00:09:51.199 44638.175 - 44848.733: 99.7298% ( 6) 00:09:51.199 44848.733 - 45059.290: 99.7810% ( 7) 00:09:51.199 45059.290 - 45269.847: 99.8248% ( 6) 00:09:51.199 45269.847 - 45480.405: 99.8759% ( 7) 00:09:51.199 45480.405 - 45690.962: 99.9270% ( 7) 00:09:51.199 45690.962 - 45901.520: 99.9708% ( 6) 00:09:51.199 45901.520 - 46112.077: 100.0000% ( 4) 00:09:51.199 00:09:51.199 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:51.199 ============================================================================== 00:09:51.199 Range in us Cumulative IO count 00:09:51.199 7895.904 - 7948.543: 0.0073% ( 1) 00:09:51.199 7948.543 - 8001.182: 0.0727% ( 9) 00:09:51.199 8001.182 - 8053.822: 0.6250% ( 76) 00:09:51.199 8053.822 - 8106.461: 1.7151% ( 150) 00:09:51.199 8106.461 - 8159.100: 3.4302% ( 236) 00:09:51.199 8159.100 - 8211.740: 5.9157% ( 342) 00:09:51.199 8211.740 - 8264.379: 9.1061% ( 439) 00:09:51.199 8264.379 - 8317.018: 12.9797% ( 533) 00:09:51.199 8317.018 - 8369.658: 16.7078% ( 513) 00:09:51.199 8369.658 - 8422.297: 20.6105% ( 537) 00:09:51.199 8422.297 - 8474.937: 24.5058% ( 536) 00:09:51.199 8474.937 - 8527.576: 28.6265% ( 567) 00:09:51.199 8527.576 - 8580.215: 32.7108% ( 562) 00:09:51.199 8580.215 - 8632.855: 36.9913% ( 589) 00:09:51.199 8632.855 - 8685.494: 41.3081% ( 594) 00:09:51.199 8685.494 - 8738.133: 45.5015% ( 577) 00:09:51.199 8738.133 - 8790.773: 49.8983% ( 605) 00:09:51.199 8790.773 - 8843.412: 54.2733% ( 602) 00:09:51.199 8843.412 - 8896.051: 58.6555% ( 603) 00:09:51.199 8896.051 - 8948.691: 62.4855% ( 527) 00:09:51.199 8948.691 - 9001.330: 66.0465% ( 490) 00:09:51.199 9001.330 - 9053.969: 69.1206% ( 423) 00:09:51.199 9053.969 - 9106.609: 71.5116% ( 329) 00:09:51.199 9106.609 - 9159.248: 73.1177% ( 221) 00:09:51.199 9159.248 - 9211.888: 74.4331% ( 181) 00:09:51.199 9211.888 - 9264.527: 75.4797% ( 144) 00:09:51.199 9264.527 - 9317.166: 76.5625% ( 149) 00:09:51.199 9317.166 - 9369.806: 77.5291% ( 133) 00:09:51.199 9369.806 - 9422.445: 78.5029% ( 134) 00:09:51.199 9422.445 - 9475.084: 79.3532% ( 117) 00:09:51.199 9475.084 - 9527.724: 80.1453% ( 109) 00:09:51.199 9527.724 - 9580.363: 81.0320% ( 122) 00:09:51.199 9580.363 - 9633.002: 81.9767% ( 130) 00:09:51.199 9633.002 - 9685.642: 82.7907% ( 112) 00:09:51.199 9685.642 - 9738.281: 83.6919% ( 124) 00:09:51.199 9738.281 - 9790.920: 84.4404% ( 103) 00:09:51.199 9790.920 - 9843.560: 85.2035% ( 105) 00:09:51.199 9843.560 - 9896.199: 85.9375% ( 101) 00:09:51.199 9896.199 - 9948.839: 86.6279% ( 95) 00:09:51.199 9948.839 - 10001.478: 87.3256% ( 96) 00:09:51.199 10001.478 - 10054.117: 88.0378% ( 98) 00:09:51.199 10054.117 - 10106.757: 88.6410% ( 83) 00:09:51.199 10106.757 - 10159.396: 89.2515% ( 84) 00:09:51.199 10159.396 - 10212.035: 89.8692% ( 85) 00:09:51.199 10212.035 
- 10264.675: 90.4070% ( 74) 00:09:51.199 10264.675 - 10317.314: 90.9448% ( 74) 00:09:51.199 10317.314 - 10369.953: 91.5189% ( 79) 00:09:51.199 10369.953 - 10422.593: 91.9331% ( 57) 00:09:51.199 10422.593 - 10475.232: 92.2892% ( 49) 00:09:51.199 10475.232 - 10527.871: 92.6090% ( 44) 00:09:51.199 10527.871 - 10580.511: 92.9288% ( 44) 00:09:51.199 10580.511 - 10633.150: 93.2195% ( 40) 00:09:51.199 10633.150 - 10685.790: 93.4448% ( 31) 00:09:51.199 10685.790 - 10738.429: 93.7791% ( 46) 00:09:51.199 10738.429 - 10791.068: 94.0189% ( 33) 00:09:51.199 10791.068 - 10843.708: 94.2660% ( 34) 00:09:51.199 10843.708 - 10896.347: 94.5640% ( 41) 00:09:51.199 10896.347 - 10948.986: 94.8692% ( 42) 00:09:51.199 10948.986 - 11001.626: 95.1308% ( 36) 00:09:51.199 11001.626 - 11054.265: 95.3561% ( 31) 00:09:51.199 11054.265 - 11106.904: 95.5451% ( 26) 00:09:51.199 11106.904 - 11159.544: 95.7485% ( 28) 00:09:51.199 11159.544 - 11212.183: 95.9520% ( 28) 00:09:51.199 11212.183 - 11264.822: 96.1483% ( 27) 00:09:51.199 11264.822 - 11317.462: 96.3299% ( 25) 00:09:51.199 11317.462 - 11370.101: 96.5044% ( 24) 00:09:51.199 11370.101 - 11422.741: 96.6860% ( 25) 00:09:51.199 11422.741 - 11475.380: 96.8605% ( 24) 00:09:51.199 11475.380 - 11528.019: 97.0203% ( 22) 00:09:51.199 11528.019 - 11580.659: 97.1875% ( 23) 00:09:51.199 11580.659 - 11633.298: 97.3401% ( 21) 00:09:51.199 11633.298 - 11685.937: 97.4491% ( 15) 00:09:51.199 11685.937 - 11738.577: 97.5581% ( 15) 00:09:51.199 11738.577 - 11791.216: 97.6163% ( 8) 00:09:51.199 11791.216 - 11843.855: 97.6526% ( 5) 00:09:51.199 11843.855 - 11896.495: 97.7108% ( 8) 00:09:51.199 11896.495 - 11949.134: 97.7616% ( 7) 00:09:51.199 11949.134 - 12001.773: 97.8052% ( 6) 00:09:51.199 12001.773 - 12054.413: 97.8488% ( 6) 00:09:51.199 12054.413 - 12107.052: 97.9070% ( 8) 00:09:51.199 12107.052 - 12159.692: 97.9578% ( 7) 00:09:51.199 12159.692 - 12212.331: 98.0378% ( 11) 00:09:51.199 12212.331 - 12264.970: 98.0887% ( 7) 00:09:51.199 12264.970 - 12317.610: 98.1541% ( 9) 00:09:51.199 12317.610 - 12370.249: 98.2267% ( 10) 00:09:51.199 12370.249 - 12422.888: 98.2922% ( 9) 00:09:51.199 12422.888 - 12475.528: 98.3576% ( 9) 00:09:51.199 12475.528 - 12528.167: 98.4230% ( 9) 00:09:51.199 12528.167 - 12580.806: 98.4956% ( 10) 00:09:51.199 12580.806 - 12633.446: 98.5538% ( 8) 00:09:51.199 12633.446 - 12686.085: 98.6192% ( 9) 00:09:51.199 12686.085 - 12738.724: 98.6846% ( 9) 00:09:51.199 12738.724 - 12791.364: 98.7500% ( 9) 00:09:51.199 12791.364 - 12844.003: 98.8081% ( 8) 00:09:51.199 12844.003 - 12896.643: 98.8445% ( 5) 00:09:51.199 12896.643 - 12949.282: 98.8590% ( 2) 00:09:51.199 12949.282 - 13001.921: 98.8735% ( 2) 00:09:51.199 13001.921 - 13054.561: 98.8881% ( 2) 00:09:51.199 13054.561 - 13107.200: 98.9099% ( 3) 00:09:51.199 13107.200 - 13159.839: 98.9244% ( 2) 00:09:51.199 13159.839 - 13212.479: 98.9390% ( 2) 00:09:51.199 13212.479 - 13265.118: 98.9535% ( 2) 00:09:51.199 13265.118 - 13317.757: 98.9753% ( 3) 00:09:51.199 13317.757 - 13370.397: 98.9826% ( 1) 00:09:51.199 13370.397 - 13423.036: 99.0044% ( 3) 00:09:51.199 13423.036 - 13475.676: 99.0262% ( 3) 00:09:51.199 13475.676 - 13580.954: 99.0480% ( 3) 00:09:51.199 13580.954 - 13686.233: 99.0698% ( 3) 00:09:51.199 29267.483 - 29478.040: 99.1134% ( 6) 00:09:51.199 29478.040 - 29688.598: 99.1570% ( 6) 00:09:51.199 29688.598 - 29899.155: 99.2006% ( 6) 00:09:51.199 29899.155 - 30109.712: 99.2515% ( 7) 00:09:51.199 30109.712 - 30320.270: 99.2951% ( 6) 00:09:51.199 30320.270 - 30530.827: 99.3459% ( 7) 00:09:51.199 30530.827 - 30741.385: 99.3968% 
( 7) 00:09:51.199 30741.385 - 30951.942: 99.4404% ( 6) 00:09:51.199 30951.942 - 31162.500: 99.4913% ( 7) 00:09:51.199 31162.500 - 31373.057: 99.5349% ( 6) 00:09:51.199 36215.878 - 36426.435: 99.5712% ( 5) 00:09:51.199 36426.435 - 36636.993: 99.6148% ( 6) 00:09:51.199 36636.993 - 36847.550: 99.6584% ( 6) 00:09:51.199 36847.550 - 37058.108: 99.7093% ( 7) 00:09:51.199 37058.108 - 37268.665: 99.7529% ( 6) 00:09:51.199 37268.665 - 37479.222: 99.8038% ( 7) 00:09:51.199 37479.222 - 37689.780: 99.8474% ( 6) 00:09:51.199 37689.780 - 37900.337: 99.8910% ( 6) 00:09:51.199 37900.337 - 38110.895: 99.9419% ( 7) 00:09:51.199 38110.895 - 38321.452: 99.9855% ( 6) 00:09:51.199 38321.452 - 38532.010: 100.0000% ( 2) 00:09:51.199 00:09:51.199 03:20:14 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:52.578 Initializing NVMe Controllers 00:09:52.578 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.578 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.578 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.578 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.578 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:52.578 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:52.578 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:52.578 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:52.578 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:52.578 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:52.578 Initialization complete. Launching workers. 00:09:52.578 ======================================================== 00:09:52.578 Latency(us) 00:09:52.578 Device Information : IOPS MiB/s Average min max 00:09:52.578 PCIE (0000:00:10.0) NSID 1 from core 0: 14674.90 171.97 8741.97 6890.37 46468.75 00:09:52.578 PCIE (0000:00:11.0) NSID 1 from core 0: 14674.90 171.97 8724.67 7012.38 44492.39 00:09:52.578 PCIE (0000:00:13.0) NSID 1 from core 0: 14674.90 171.97 8707.36 6901.32 43254.32 00:09:52.578 PCIE (0000:00:12.0) NSID 1 from core 0: 14674.90 171.97 8689.77 7166.08 41443.21 00:09:52.578 PCIE (0000:00:12.0) NSID 2 from core 0: 14674.90 171.97 8672.26 6901.11 39600.64 00:09:52.578 PCIE (0000:00:12.0) NSID 3 from core 0: 14738.71 172.72 8617.79 6946.12 29275.07 00:09:52.578 ======================================================== 00:09:52.578 Total : 88113.23 1032.58 8692.25 6890.37 46468.75 00:09:52.578 00:09:52.578 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:52.578 ================================================================================= 00:09:52.578 1.00000% : 7316.871us 00:09:52.578 10.00000% : 7685.346us 00:09:52.578 25.00000% : 8001.182us 00:09:52.578 50.00000% : 8369.658us 00:09:52.578 75.00000% : 8843.412us 00:09:52.578 90.00000% : 9317.166us 00:09:52.578 95.00000% : 9738.281us 00:09:52.578 98.00000% : 11106.904us 00:09:52.578 99.00000% : 16528.758us 00:09:52.578 99.50000% : 36215.878us 00:09:52.578 99.90000% : 45901.520us 00:09:52.578 99.99000% : 46533.192us 00:09:52.578 99.99900% : 46533.192us 00:09:52.578 99.99990% : 46533.192us 00:09:52.578 99.99999% : 46533.192us 00:09:52.578 00:09:52.578 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:52.578 ================================================================================= 00:09:52.578 1.00000% : 7422.149us 00:09:52.578 10.00000% : 7737.986us 00:09:52.578 25.00000% : 8001.182us 00:09:52.579 50.00000% : 8369.658us 
00:09:52.579 75.00000% : 8843.412us 00:09:52.579 90.00000% : 9264.527us 00:09:52.579 95.00000% : 9685.642us 00:09:52.579 98.00000% : 11633.298us 00:09:52.579 99.00000% : 15581.250us 00:09:52.579 99.50000% : 34952.533us 00:09:52.579 99.90000% : 44006.503us 00:09:52.579 99.99000% : 44638.175us 00:09:52.579 99.99900% : 44638.175us 00:09:52.579 99.99990% : 44638.175us 00:09:52.579 99.99999% : 44638.175us 00:09:52.579 00:09:52.579 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:52.579 ================================================================================= 00:09:52.579 1.00000% : 7316.871us 00:09:52.579 10.00000% : 7737.986us 00:09:52.579 25.00000% : 7948.543us 00:09:52.579 50.00000% : 8369.658us 00:09:52.579 75.00000% : 8843.412us 00:09:52.579 90.00000% : 9264.527us 00:09:52.579 95.00000% : 9685.642us 00:09:52.579 98.00000% : 12107.052us 00:09:52.579 99.00000% : 15265.414us 00:09:52.579 99.50000% : 33057.516us 00:09:52.579 99.90000% : 42743.158us 00:09:52.579 99.99000% : 43374.831us 00:09:52.579 99.99900% : 43374.831us 00:09:52.579 99.99990% : 43374.831us 00:09:52.579 99.99999% : 43374.831us 00:09:52.579 00:09:52.579 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:52.579 ================================================================================= 00:09:52.579 1.00000% : 7422.149us 00:09:52.579 10.00000% : 7737.986us 00:09:52.579 25.00000% : 7948.543us 00:09:52.579 50.00000% : 8369.658us 00:09:52.579 75.00000% : 8843.412us 00:09:52.579 90.00000% : 9264.527us 00:09:52.579 95.00000% : 9738.281us 00:09:52.579 98.00000% : 12107.052us 00:09:52.579 99.00000% : 15160.135us 00:09:52.579 99.50000% : 30951.942us 00:09:52.579 99.90000% : 41058.699us 00:09:52.579 99.99000% : 41479.814us 00:09:52.579 99.99900% : 41479.814us 00:09:52.579 99.99990% : 41479.814us 00:09:52.579 99.99999% : 41479.814us 00:09:52.579 00:09:52.579 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:52.579 ================================================================================= 00:09:52.579 1.00000% : 7422.149us 00:09:52.579 10.00000% : 7737.986us 00:09:52.579 25.00000% : 8001.182us 00:09:52.579 50.00000% : 8369.658us 00:09:52.579 75.00000% : 8843.412us 00:09:52.579 90.00000% : 9317.166us 00:09:52.579 95.00000% : 9790.920us 00:09:52.579 98.00000% : 11791.216us 00:09:52.579 99.00000% : 15054.856us 00:09:52.579 99.50000% : 29056.925us 00:09:52.579 99.90000% : 39163.682us 00:09:52.579 99.99000% : 39584.797us 00:09:52.579 99.99900% : 39795.354us 00:09:52.579 99.99990% : 39795.354us 00:09:52.579 99.99999% : 39795.354us 00:09:52.579 00:09:52.579 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:52.579 ================================================================================= 00:09:52.579 1.00000% : 7369.510us 00:09:52.579 10.00000% : 7737.986us 00:09:52.579 25.00000% : 8001.182us 00:09:52.579 50.00000% : 8369.658us 00:09:52.579 75.00000% : 8790.773us 00:09:52.579 90.00000% : 9317.166us 00:09:52.579 95.00000% : 9896.199us 00:09:52.579 98.00000% : 13265.118us 00:09:52.579 99.00000% : 16107.643us 00:09:52.579 99.50000% : 19897.677us 00:09:52.579 99.90000% : 28846.368us 00:09:52.579 99.99000% : 29267.483us 00:09:52.579 99.99900% : 29478.040us 00:09:52.579 99.99990% : 29478.040us 00:09:52.579 99.99999% : 29478.040us 00:09:52.579 00:09:52.579 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:52.579 ============================================================================== 00:09:52.579 Range in us Cumulative IO 
count 00:09:52.579 6843.116 - 6895.756: 0.0068% ( 1) 00:09:52.579 6895.756 - 6948.395: 0.0136% ( 1) 00:09:52.579 6948.395 - 7001.035: 0.0204% ( 1) 00:09:52.579 7001.035 - 7053.674: 0.0543% ( 5) 00:09:52.579 7053.674 - 7106.313: 0.1834% ( 19) 00:09:52.579 7106.313 - 7158.953: 0.3057% ( 18) 00:09:52.579 7158.953 - 7211.592: 0.5503% ( 36) 00:09:52.579 7211.592 - 7264.231: 0.8288% ( 41) 00:09:52.579 7264.231 - 7316.871: 1.3179% ( 72) 00:09:52.579 7316.871 - 7369.510: 2.0448% ( 107) 00:09:52.579 7369.510 - 7422.149: 2.9484% ( 133) 00:09:52.579 7422.149 - 7474.789: 4.4090% ( 215) 00:09:52.579 7474.789 - 7527.428: 6.0190% ( 237) 00:09:52.579 7527.428 - 7580.067: 7.6834% ( 245) 00:09:52.579 7580.067 - 7632.707: 9.4090% ( 254) 00:09:52.579 7632.707 - 7685.346: 11.6236% ( 326) 00:09:52.579 7685.346 - 7737.986: 13.5938% ( 290) 00:09:52.579 7737.986 - 7790.625: 16.1277% ( 373) 00:09:52.579 7790.625 - 7843.264: 18.9266% ( 412) 00:09:52.579 7843.264 - 7895.904: 21.7799% ( 420) 00:09:52.579 7895.904 - 7948.543: 24.4769% ( 397) 00:09:52.579 7948.543 - 8001.182: 27.4049% ( 431) 00:09:52.579 8001.182 - 8053.822: 30.6861% ( 483) 00:09:52.579 8053.822 - 8106.461: 34.0014% ( 488) 00:09:52.579 8106.461 - 8159.100: 37.6359% ( 535) 00:09:52.579 8159.100 - 8211.740: 41.2840% ( 537) 00:09:52.579 8211.740 - 8264.379: 44.5992% ( 488) 00:09:52.579 8264.379 - 8317.018: 47.9484% ( 493) 00:09:52.579 8317.018 - 8369.658: 51.2908% ( 492) 00:09:52.579 8369.658 - 8422.297: 54.8234% ( 520) 00:09:52.579 8422.297 - 8474.937: 57.7921% ( 437) 00:09:52.579 8474.937 - 8527.576: 60.3601% ( 378) 00:09:52.579 8527.576 - 8580.215: 63.2541% ( 426) 00:09:52.579 8580.215 - 8632.855: 65.9307% ( 394) 00:09:52.579 8632.855 - 8685.494: 68.6277% ( 397) 00:09:52.579 8685.494 - 8738.133: 71.6916% ( 451) 00:09:52.579 8738.133 - 8790.773: 74.2799% ( 381) 00:09:52.579 8790.773 - 8843.412: 76.5149% ( 329) 00:09:52.579 8843.412 - 8896.051: 78.4918% ( 291) 00:09:52.579 8896.051 - 8948.691: 80.5435% ( 302) 00:09:52.579 8948.691 - 9001.330: 82.5272% ( 292) 00:09:52.579 9001.330 - 9053.969: 84.2731% ( 257) 00:09:52.579 9053.969 - 9106.609: 85.9647% ( 249) 00:09:52.579 9106.609 - 9159.248: 87.3098% ( 198) 00:09:52.579 9159.248 - 9211.888: 88.4986% ( 175) 00:09:52.579 9211.888 - 9264.527: 89.4905% ( 146) 00:09:52.579 9264.527 - 9317.166: 90.1766% ( 101) 00:09:52.579 9317.166 - 9369.806: 90.8220% ( 95) 00:09:52.579 9369.806 - 9422.445: 91.5353% ( 105) 00:09:52.579 9422.445 - 9475.084: 92.3573% ( 121) 00:09:52.579 9475.084 - 9527.724: 92.9416% ( 86) 00:09:52.579 9527.724 - 9580.363: 93.4783% ( 79) 00:09:52.579 9580.363 - 9633.002: 93.9606% ( 71) 00:09:52.579 9633.002 - 9685.642: 94.5109% ( 81) 00:09:52.579 9685.642 - 9738.281: 95.0476% ( 79) 00:09:52.579 9738.281 - 9790.920: 95.4891% ( 65) 00:09:52.579 9790.920 - 9843.560: 95.9307% ( 65) 00:09:52.579 9843.560 - 9896.199: 96.2636% ( 49) 00:09:52.579 9896.199 - 9948.839: 96.4130% ( 22) 00:09:52.579 9948.839 - 10001.478: 96.5557% ( 21) 00:09:52.579 10001.478 - 10054.117: 96.6712% ( 17) 00:09:52.579 10054.117 - 10106.757: 96.7595% ( 13) 00:09:52.579 10106.757 - 10159.396: 96.8750% ( 17) 00:09:52.579 10159.396 - 10212.035: 97.0245% ( 22) 00:09:52.579 10212.035 - 10264.675: 97.1399% ( 17) 00:09:52.579 10264.675 - 10317.314: 97.2351% ( 14) 00:09:52.579 10317.314 - 10369.953: 97.2894% ( 8) 00:09:52.579 10369.953 - 10422.593: 97.3505% ( 9) 00:09:52.579 10422.593 - 10475.232: 97.3573% ( 1) 00:09:52.579 10475.232 - 10527.871: 97.3913% ( 5) 00:09:52.579 10527.871 - 10580.511: 97.4389% ( 7) 00:09:52.579 10580.511 - 
10633.150: 97.5000% ( 9) 00:09:52.579 10633.150 - 10685.790: 97.5951% ( 14) 00:09:52.579 10685.790 - 10738.429: 97.6766% ( 12) 00:09:52.579 10738.429 - 10791.068: 97.7853% ( 16) 00:09:52.579 10791.068 - 10843.708: 97.8601% ( 11) 00:09:52.579 10843.708 - 10896.347: 97.8872% ( 4) 00:09:52.579 10896.347 - 10948.986: 97.9144% ( 4) 00:09:52.579 10948.986 - 11001.626: 97.9416% ( 4) 00:09:52.579 11001.626 - 11054.265: 97.9823% ( 6) 00:09:52.579 11054.265 - 11106.904: 98.0163% ( 5) 00:09:52.579 11106.904 - 11159.544: 98.0367% ( 3) 00:09:52.579 11159.544 - 11212.183: 98.0639% ( 4) 00:09:52.579 11212.183 - 11264.822: 98.0910% ( 4) 00:09:52.579 11264.822 - 11317.462: 98.0978% ( 1) 00:09:52.579 11370.101 - 11422.741: 98.1182% ( 3) 00:09:52.579 11422.741 - 11475.380: 98.1454% ( 4) 00:09:52.579 11475.380 - 11528.019: 98.1726% ( 4) 00:09:52.579 11528.019 - 11580.659: 98.2201% ( 7) 00:09:52.579 11580.659 - 11633.298: 98.2609% ( 6) 00:09:52.579 13475.676 - 13580.954: 98.2677% ( 1) 00:09:52.579 13580.954 - 13686.233: 98.3152% ( 7) 00:09:52.579 13686.233 - 13791.512: 98.4443% ( 19) 00:09:52.579 13791.512 - 13896.790: 98.5258% ( 12) 00:09:52.579 13896.790 - 14002.069: 98.5530% ( 4) 00:09:52.579 14002.069 - 14107.348: 98.5870% ( 5) 00:09:52.579 14107.348 - 14212.627: 98.6141% ( 4) 00:09:52.579 14212.627 - 14317.905: 98.6209% ( 1) 00:09:52.579 14317.905 - 14423.184: 98.6481% ( 4) 00:09:52.579 14423.184 - 14528.463: 98.6617% ( 2) 00:09:52.579 14528.463 - 14633.741: 98.6753% ( 2) 00:09:52.579 14633.741 - 14739.020: 98.6889% ( 2) 00:09:52.579 15054.856 - 15160.135: 98.6957% ( 1) 00:09:52.579 15581.250 - 15686.529: 98.7024% ( 1) 00:09:52.579 15686.529 - 15791.807: 98.7704% ( 10) 00:09:52.579 15791.807 - 15897.086: 98.8451% ( 11) 00:09:52.579 15897.086 - 16002.365: 98.8519% ( 1) 00:09:52.579 16002.365 - 16107.643: 98.8655% ( 2) 00:09:52.579 16107.643 - 16212.922: 98.8927% ( 4) 00:09:52.579 16212.922 - 16318.201: 98.9130% ( 3) 00:09:52.579 16318.201 - 16423.480: 98.9470% ( 5) 00:09:52.579 16423.480 - 16528.758: 99.0014% ( 8) 00:09:52.579 16528.758 - 16634.037: 99.0557% ( 8) 00:09:52.579 16634.037 - 16739.316: 99.1101% ( 8) 00:09:52.580 16739.316 - 16844.594: 99.1304% ( 3) 00:09:52.580 34531.418 - 34741.976: 99.1440% ( 2) 00:09:52.580 34741.976 - 34952.533: 99.1916% ( 7) 00:09:52.580 34952.533 - 35163.091: 99.2527% ( 9) 00:09:52.580 35163.091 - 35373.648: 99.3207% ( 10) 00:09:52.580 35373.648 - 35584.206: 99.3818% ( 9) 00:09:52.580 35584.206 - 35794.763: 99.4497% ( 10) 00:09:52.580 35794.763 - 36005.320: 99.4905% ( 6) 00:09:52.580 36005.320 - 36215.878: 99.5312% ( 6) 00:09:52.580 36215.878 - 36426.435: 99.5652% ( 5) 00:09:52.580 42322.043 - 42532.601: 99.5788% ( 2) 00:09:52.580 42532.601 - 42743.158: 99.5924% ( 2) 00:09:52.580 43374.831 - 43585.388: 99.5992% ( 1) 00:09:52.580 43585.388 - 43795.945: 99.6196% ( 3) 00:09:52.580 43795.945 - 44006.503: 99.6603% ( 6) 00:09:52.580 44006.503 - 44217.060: 99.6807% ( 3) 00:09:52.580 44217.060 - 44427.618: 99.7079% ( 4) 00:09:52.580 44427.618 - 44638.175: 99.7351% ( 4) 00:09:52.580 44638.175 - 44848.733: 99.7622% ( 4) 00:09:52.580 44848.733 - 45059.290: 99.7894% ( 4) 00:09:52.580 45059.290 - 45269.847: 99.8302% ( 6) 00:09:52.580 45269.847 - 45480.405: 99.8641% ( 5) 00:09:52.580 45480.405 - 45690.962: 99.8913% ( 4) 00:09:52.580 45690.962 - 45901.520: 99.9253% ( 5) 00:09:52.580 45901.520 - 46112.077: 99.9592% ( 5) 00:09:52.580 46112.077 - 46322.635: 99.9864% ( 4) 00:09:52.580 46322.635 - 46533.192: 100.0000% ( 2) 00:09:52.580 00:09:52.580 Latency histogram for PCIE (0000:00:11.0) 
NSID 1 from core 0: 00:09:52.580 ============================================================================== 00:09:52.580 Range in us Cumulative IO count 00:09:52.580 7001.035 - 7053.674: 0.0204% ( 3) 00:09:52.580 7053.674 - 7106.313: 0.0543% ( 5) 00:09:52.580 7106.313 - 7158.953: 0.0815% ( 4) 00:09:52.580 7158.953 - 7211.592: 0.1698% ( 13) 00:09:52.580 7211.592 - 7264.231: 0.4552% ( 42) 00:09:52.580 7264.231 - 7316.871: 0.6046% ( 22) 00:09:52.580 7316.871 - 7369.510: 0.9103% ( 45) 00:09:52.580 7369.510 - 7422.149: 1.4810% ( 84) 00:09:52.580 7422.149 - 7474.789: 2.1128% ( 93) 00:09:52.580 7474.789 - 7527.428: 2.8872% ( 114) 00:09:52.580 7527.428 - 7580.067: 4.3478% ( 215) 00:09:52.580 7580.067 - 7632.707: 6.2228% ( 276) 00:09:52.580 7632.707 - 7685.346: 8.3628% ( 315) 00:09:52.580 7685.346 - 7737.986: 10.7337% ( 349) 00:09:52.580 7737.986 - 7790.625: 13.4579% ( 401) 00:09:52.580 7790.625 - 7843.264: 16.4470% ( 440) 00:09:52.580 7843.264 - 7895.904: 20.0000% ( 523) 00:09:52.580 7895.904 - 7948.543: 24.1848% ( 616) 00:09:52.580 7948.543 - 8001.182: 27.3777% ( 470) 00:09:52.580 8001.182 - 8053.822: 31.4470% ( 599) 00:09:52.580 8053.822 - 8106.461: 34.0082% ( 377) 00:09:52.580 8106.461 - 8159.100: 37.6562% ( 537) 00:09:52.580 8159.100 - 8211.740: 41.9701% ( 635) 00:09:52.580 8211.740 - 8264.379: 45.1834% ( 473) 00:09:52.580 8264.379 - 8317.018: 48.3356% ( 464) 00:09:52.580 8317.018 - 8369.658: 51.3995% ( 451) 00:09:52.580 8369.658 - 8422.297: 54.7351% ( 491) 00:09:52.580 8422.297 - 8474.937: 57.8533% ( 459) 00:09:52.580 8474.937 - 8527.576: 61.7663% ( 576) 00:09:52.580 8527.576 - 8580.215: 65.2717% ( 516) 00:09:52.580 8580.215 - 8632.855: 68.0707% ( 412) 00:09:52.580 8632.855 - 8685.494: 70.9986% ( 431) 00:09:52.580 8685.494 - 8738.133: 73.3356% ( 344) 00:09:52.580 8738.133 - 8790.773: 74.8302% ( 220) 00:09:52.580 8790.773 - 8843.412: 76.9837% ( 317) 00:09:52.580 8843.412 - 8896.051: 78.8995% ( 282) 00:09:52.580 8896.051 - 8948.691: 80.7201% ( 268) 00:09:52.580 8948.691 - 9001.330: 82.5679% ( 272) 00:09:52.580 9001.330 - 9053.969: 84.6399% ( 305) 00:09:52.580 9053.969 - 9106.609: 86.3587% ( 253) 00:09:52.580 9106.609 - 9159.248: 87.7921% ( 211) 00:09:52.580 9159.248 - 9211.888: 89.2188% ( 210) 00:09:52.580 9211.888 - 9264.527: 90.3940% ( 173) 00:09:52.580 9264.527 - 9317.166: 91.4674% ( 158) 00:09:52.580 9317.166 - 9369.806: 92.2554% ( 116) 00:09:52.580 9369.806 - 9422.445: 92.8533% ( 88) 00:09:52.580 9422.445 - 9475.084: 93.3832% ( 78) 00:09:52.580 9475.084 - 9527.724: 93.8791% ( 73) 00:09:52.580 9527.724 - 9580.363: 94.2188% ( 50) 00:09:52.580 9580.363 - 9633.002: 94.6603% ( 65) 00:09:52.580 9633.002 - 9685.642: 95.0136% ( 52) 00:09:52.580 9685.642 - 9738.281: 95.3057% ( 43) 00:09:52.580 9738.281 - 9790.920: 95.6929% ( 57) 00:09:52.580 9790.920 - 9843.560: 96.0326% ( 50) 00:09:52.580 9843.560 - 9896.199: 96.5082% ( 70) 00:09:52.580 9896.199 - 9948.839: 96.7459% ( 35) 00:09:52.580 9948.839 - 10001.478: 96.9429% ( 29) 00:09:52.580 10001.478 - 10054.117: 97.2215% ( 41) 00:09:52.580 10054.117 - 10106.757: 97.3030% ( 12) 00:09:52.580 10106.757 - 10159.396: 97.3438% ( 6) 00:09:52.580 10159.396 - 10212.035: 97.3573% ( 2) 00:09:52.580 10212.035 - 10264.675: 97.3845% ( 4) 00:09:52.580 10264.675 - 10317.314: 97.4185% ( 5) 00:09:52.580 10317.314 - 10369.953: 97.4321% ( 2) 00:09:52.580 10369.953 - 10422.593: 97.4457% ( 2) 00:09:52.580 10422.593 - 10475.232: 97.4592% ( 2) 00:09:52.580 10475.232 - 10527.871: 97.5476% ( 13) 00:09:52.580 10527.871 - 10580.511: 97.6359% ( 13) 00:09:52.580 10580.511 - 
10633.150: 97.7853% ( 22) 00:09:52.580 10633.150 - 10685.790: 97.8057% ( 3) 00:09:52.580 10685.790 - 10738.429: 97.8193% ( 2) 00:09:52.580 10843.708 - 10896.347: 97.8261% ( 1) 00:09:52.580 11212.183 - 11264.822: 97.8397% ( 2) 00:09:52.580 11264.822 - 11317.462: 97.8533% ( 2) 00:09:52.580 11317.462 - 11370.101: 97.8668% ( 2) 00:09:52.580 11370.101 - 11422.741: 97.8804% ( 2) 00:09:52.580 11422.741 - 11475.380: 97.9008% ( 3) 00:09:52.580 11475.380 - 11528.019: 97.9416% ( 6) 00:09:52.580 11528.019 - 11580.659: 97.9823% ( 6) 00:09:52.580 11580.659 - 11633.298: 98.0299% ( 7) 00:09:52.580 11633.298 - 11685.937: 98.0367% ( 1) 00:09:52.580 11685.937 - 11738.577: 98.1046% ( 10) 00:09:52.580 11738.577 - 11791.216: 98.1454% ( 6) 00:09:52.580 11791.216 - 11843.855: 98.1929% ( 7) 00:09:52.580 11843.855 - 11896.495: 98.2201% ( 4) 00:09:52.580 11896.495 - 11949.134: 98.2609% ( 6) 00:09:52.580 13370.397 - 13423.036: 98.3084% ( 7) 00:09:52.580 13423.036 - 13475.676: 98.3492% ( 6) 00:09:52.580 13475.676 - 13580.954: 98.4986% ( 22) 00:09:52.580 13580.954 - 13686.233: 98.6141% ( 17) 00:09:52.580 13686.233 - 13791.512: 98.6345% ( 3) 00:09:52.580 13791.512 - 13896.790: 98.6549% ( 3) 00:09:52.580 13896.790 - 14002.069: 98.6753% ( 3) 00:09:52.580 14002.069 - 14107.348: 98.6957% ( 3) 00:09:52.580 15160.135 - 15265.414: 98.7024% ( 1) 00:09:52.580 15265.414 - 15370.692: 98.7296% ( 4) 00:09:52.580 15370.692 - 15475.971: 98.7568% ( 4) 00:09:52.580 15475.971 - 15581.250: 99.0353% ( 41) 00:09:52.580 15581.250 - 15686.529: 99.1033% ( 10) 00:09:52.580 15686.529 - 15791.807: 99.1304% ( 4) 00:09:52.580 32636.402 - 32846.959: 99.1508% ( 3) 00:09:52.580 32846.959 - 33057.516: 99.1848% ( 5) 00:09:52.580 33057.516 - 33268.074: 99.2188% ( 5) 00:09:52.580 33268.074 - 33478.631: 99.2595% ( 6) 00:09:52.580 33478.631 - 33689.189: 99.3003% ( 6) 00:09:52.580 33689.189 - 33899.746: 99.3410% ( 6) 00:09:52.580 33899.746 - 34110.304: 99.3682% ( 4) 00:09:52.580 34110.304 - 34320.861: 99.4090% ( 6) 00:09:52.580 34320.861 - 34531.418: 99.4429% ( 5) 00:09:52.580 34531.418 - 34741.976: 99.4837% ( 6) 00:09:52.580 34741.976 - 34952.533: 99.5177% ( 5) 00:09:52.580 34952.533 - 35163.091: 99.5584% ( 6) 00:09:52.580 35163.091 - 35373.648: 99.5652% ( 1) 00:09:52.580 41479.814 - 41690.371: 99.5720% ( 1) 00:09:52.580 41690.371 - 41900.929: 99.5992% ( 4) 00:09:52.580 41900.929 - 42111.486: 99.6332% ( 5) 00:09:52.580 42111.486 - 42322.043: 99.6671% ( 5) 00:09:52.580 42322.043 - 42532.601: 99.6943% ( 4) 00:09:52.580 42532.601 - 42743.158: 99.7283% ( 5) 00:09:52.580 42743.158 - 42953.716: 99.7622% ( 5) 00:09:52.580 42953.716 - 43164.273: 99.7962% ( 5) 00:09:52.580 43164.273 - 43374.831: 99.8302% ( 5) 00:09:52.580 43374.831 - 43585.388: 99.8573% ( 4) 00:09:52.580 43585.388 - 43795.945: 99.8913% ( 5) 00:09:52.580 43795.945 - 44006.503: 99.9253% ( 5) 00:09:52.580 44006.503 - 44217.060: 99.9524% ( 4) 00:09:52.580 44217.060 - 44427.618: 99.9864% ( 5) 00:09:52.580 44427.618 - 44638.175: 100.0000% ( 2) 00:09:52.580 00:09:52.580 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:52.580 ============================================================================== 00:09:52.580 Range in us Cumulative IO count 00:09:52.580 6895.756 - 6948.395: 0.0136% ( 2) 00:09:52.580 6948.395 - 7001.035: 0.0204% ( 1) 00:09:52.580 7001.035 - 7053.674: 0.0543% ( 5) 00:09:52.580 7053.674 - 7106.313: 0.1698% ( 17) 00:09:52.580 7106.313 - 7158.953: 0.3533% ( 27) 00:09:52.580 7158.953 - 7211.592: 0.5707% ( 32) 00:09:52.580 7211.592 - 7264.231: 0.9375% ( 54) 00:09:52.580 
7264.231 - 7316.871: 1.2772% ( 50) 00:09:52.580 7316.871 - 7369.510: 1.7731% ( 73) 00:09:52.580 7369.510 - 7422.149: 2.4457% ( 99) 00:09:52.580 7422.149 - 7474.789: 3.1658% ( 106) 00:09:52.580 7474.789 - 7527.428: 4.0965% ( 137) 00:09:52.580 7527.428 - 7580.067: 5.4484% ( 199) 00:09:52.580 7580.067 - 7632.707: 6.9565% ( 222) 00:09:52.580 7632.707 - 7685.346: 9.0897% ( 314) 00:09:52.580 7685.346 - 7737.986: 11.8478% ( 406) 00:09:52.580 7737.986 - 7790.625: 14.8098% ( 436) 00:09:52.580 7790.625 - 7843.264: 18.5802% ( 555) 00:09:52.580 7843.264 - 7895.904: 22.3234% ( 551) 00:09:52.580 7895.904 - 7948.543: 26.0122% ( 543) 00:09:52.580 7948.543 - 8001.182: 29.6128% ( 530) 00:09:52.581 8001.182 - 8053.822: 32.8465% ( 476) 00:09:52.581 8053.822 - 8106.461: 36.7188% ( 570) 00:09:52.581 8106.461 - 8159.100: 40.0815% ( 495) 00:09:52.581 8159.100 - 8211.740: 43.3016% ( 474) 00:09:52.581 8211.740 - 8264.379: 45.8764% ( 379) 00:09:52.581 8264.379 - 8317.018: 47.9008% ( 298) 00:09:52.581 8317.018 - 8369.658: 50.4348% ( 373) 00:09:52.581 8369.658 - 8422.297: 52.7446% ( 340) 00:09:52.581 8422.297 - 8474.937: 55.3125% ( 378) 00:09:52.581 8474.937 - 8527.576: 58.1726% ( 421) 00:09:52.581 8527.576 - 8580.215: 61.1005% ( 431) 00:09:52.581 8580.215 - 8632.855: 64.2935% ( 470) 00:09:52.581 8632.855 - 8685.494: 67.6223% ( 490) 00:09:52.581 8685.494 - 8738.133: 70.6522% ( 446) 00:09:52.581 8738.133 - 8790.773: 73.3424% ( 396) 00:09:52.581 8790.773 - 8843.412: 75.8288% ( 366) 00:09:52.581 8843.412 - 8896.051: 77.7106% ( 277) 00:09:52.581 8896.051 - 8948.691: 79.6467% ( 285) 00:09:52.581 8948.691 - 9001.330: 81.6848% ( 300) 00:09:52.581 9001.330 - 9053.969: 83.6209% ( 285) 00:09:52.581 9053.969 - 9106.609: 85.5163% ( 279) 00:09:52.581 9106.609 - 9159.248: 87.2418% ( 254) 00:09:52.581 9159.248 - 9211.888: 88.9266% ( 248) 00:09:52.581 9211.888 - 9264.527: 90.2378% ( 193) 00:09:52.581 9264.527 - 9317.166: 91.2432% ( 148) 00:09:52.581 9317.166 - 9369.806: 92.1332% ( 131) 00:09:52.581 9369.806 - 9422.445: 92.7785% ( 95) 00:09:52.581 9422.445 - 9475.084: 93.3832% ( 89) 00:09:52.581 9475.084 - 9527.724: 93.9946% ( 90) 00:09:52.581 9527.724 - 9580.363: 94.4361% ( 65) 00:09:52.581 9580.363 - 9633.002: 94.8030% ( 54) 00:09:52.581 9633.002 - 9685.642: 95.2853% ( 71) 00:09:52.581 9685.642 - 9738.281: 95.9171% ( 93) 00:09:52.581 9738.281 - 9790.920: 96.2772% ( 53) 00:09:52.581 9790.920 - 9843.560: 96.5693% ( 43) 00:09:52.581 9843.560 - 9896.199: 96.9905% ( 62) 00:09:52.581 9896.199 - 9948.839: 97.1467% ( 23) 00:09:52.581 9948.839 - 10001.478: 97.2622% ( 17) 00:09:52.581 10001.478 - 10054.117: 97.3438% ( 12) 00:09:52.581 10054.117 - 10106.757: 97.3709% ( 4) 00:09:52.581 10106.757 - 10159.396: 97.3913% ( 3) 00:09:52.581 10159.396 - 10212.035: 97.4117% ( 3) 00:09:52.581 10212.035 - 10264.675: 97.4321% ( 3) 00:09:52.581 10264.675 - 10317.314: 97.4457% ( 2) 00:09:52.581 10317.314 - 10369.953: 97.4660% ( 3) 00:09:52.581 10369.953 - 10422.593: 97.4932% ( 4) 00:09:52.581 10422.593 - 10475.232: 97.5611% ( 10) 00:09:52.581 10475.232 - 10527.871: 97.6766% ( 17) 00:09:52.581 10527.871 - 10580.511: 97.7378% ( 9) 00:09:52.581 10580.511 - 10633.150: 97.7582% ( 3) 00:09:52.581 10633.150 - 10685.790: 97.7853% ( 4) 00:09:52.581 10685.790 - 10738.429: 97.7989% ( 2) 00:09:52.581 10738.429 - 10791.068: 97.8125% ( 2) 00:09:52.581 10791.068 - 10843.708: 97.8261% ( 2) 00:09:52.581 11528.019 - 11580.659: 97.8329% ( 1) 00:09:52.581 11633.298 - 11685.937: 97.8465% ( 2) 00:09:52.581 11685.937 - 11738.577: 97.8668% ( 3) 00:09:52.581 11738.577 - 11791.216: 
97.8736% ( 1) 00:09:52.581 11791.216 - 11843.855: 97.8872% ( 2) 00:09:52.581 11843.855 - 11896.495: 97.9076% ( 3) 00:09:52.581 11896.495 - 11949.134: 97.9212% ( 2) 00:09:52.581 11949.134 - 12001.773: 97.9484% ( 4) 00:09:52.581 12001.773 - 12054.413: 97.9891% ( 6) 00:09:52.581 12054.413 - 12107.052: 98.0367% ( 7) 00:09:52.581 12107.052 - 12159.692: 98.1182% ( 12) 00:09:52.581 12159.692 - 12212.331: 98.1793% ( 9) 00:09:52.581 12212.331 - 12264.970: 98.1997% ( 3) 00:09:52.581 12264.970 - 12317.610: 98.2201% ( 3) 00:09:52.581 12317.610 - 12370.249: 98.2269% ( 1) 00:09:52.581 12370.249 - 12422.888: 98.2609% ( 5) 00:09:52.581 12422.888 - 12475.528: 98.3356% ( 11) 00:09:52.581 12475.528 - 12528.167: 98.4239% ( 13) 00:09:52.581 12528.167 - 12580.806: 98.5598% ( 20) 00:09:52.581 12580.806 - 12633.446: 98.6073% ( 7) 00:09:52.581 12633.446 - 12686.085: 98.6277% ( 3) 00:09:52.581 12686.085 - 12738.724: 98.6345% ( 1) 00:09:52.581 12738.724 - 12791.364: 98.6413% ( 1) 00:09:52.581 12791.364 - 12844.003: 98.6549% ( 2) 00:09:52.581 12844.003 - 12896.643: 98.6617% ( 1) 00:09:52.581 12896.643 - 12949.282: 98.6753% ( 2) 00:09:52.581 12949.282 - 13001.921: 98.6821% ( 1) 00:09:52.581 13001.921 - 13054.561: 98.6889% ( 1) 00:09:52.581 13054.561 - 13107.200: 98.6957% ( 1) 00:09:52.581 14633.741 - 14739.020: 98.7024% ( 1) 00:09:52.581 14739.020 - 14844.299: 98.7092% ( 1) 00:09:52.581 14949.578 - 15054.856: 98.7636% ( 8) 00:09:52.581 15054.856 - 15160.135: 98.8315% ( 10) 00:09:52.581 15160.135 - 15265.414: 99.0353% ( 30) 00:09:52.581 15265.414 - 15370.692: 99.0829% ( 7) 00:09:52.581 15370.692 - 15475.971: 99.1033% ( 3) 00:09:52.581 15475.971 - 15581.250: 99.1236% ( 3) 00:09:52.581 15581.250 - 15686.529: 99.1304% ( 1) 00:09:52.581 30741.385 - 30951.942: 99.1372% ( 1) 00:09:52.581 30951.942 - 31162.500: 99.1712% ( 5) 00:09:52.581 31162.500 - 31373.057: 99.2052% ( 5) 00:09:52.581 31373.057 - 31583.614: 99.2459% ( 6) 00:09:52.581 31583.614 - 31794.172: 99.2799% ( 5) 00:09:52.581 31794.172 - 32004.729: 99.3207% ( 6) 00:09:52.581 32004.729 - 32215.287: 99.3546% ( 5) 00:09:52.581 32215.287 - 32425.844: 99.3954% ( 6) 00:09:52.581 32425.844 - 32636.402: 99.4293% ( 5) 00:09:52.581 32636.402 - 32846.959: 99.4633% ( 5) 00:09:52.581 32846.959 - 33057.516: 99.5041% ( 6) 00:09:52.581 33057.516 - 33268.074: 99.5448% ( 6) 00:09:52.581 33268.074 - 33478.631: 99.5652% ( 3) 00:09:52.581 40427.027 - 40637.584: 99.5992% ( 5) 00:09:52.581 40637.584 - 40848.141: 99.6332% ( 5) 00:09:52.581 40848.141 - 41058.699: 99.6535% ( 3) 00:09:52.581 41058.699 - 41269.256: 99.6807% ( 4) 00:09:52.581 41269.256 - 41479.814: 99.7147% ( 5) 00:09:52.581 41479.814 - 41690.371: 99.7486% ( 5) 00:09:52.581 41690.371 - 41900.929: 99.7758% ( 4) 00:09:52.581 41900.929 - 42111.486: 99.8098% ( 5) 00:09:52.581 42111.486 - 42322.043: 99.8505% ( 6) 00:09:52.581 42322.043 - 42532.601: 99.8845% ( 5) 00:09:52.581 42532.601 - 42743.158: 99.9185% ( 5) 00:09:52.581 42743.158 - 42953.716: 99.9524% ( 5) 00:09:52.581 42953.716 - 43164.273: 99.9864% ( 5) 00:09:52.581 43164.273 - 43374.831: 100.0000% ( 2) 00:09:52.581 00:09:52.581 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:52.581 ============================================================================== 00:09:52.581 Range in us Cumulative IO count 00:09:52.581 7158.953 - 7211.592: 0.0543% ( 8) 00:09:52.581 7211.592 - 7264.231: 0.1630% ( 16) 00:09:52.581 7264.231 - 7316.871: 0.3261% ( 24) 00:09:52.581 7316.871 - 7369.510: 0.7133% ( 57) 00:09:52.581 7369.510 - 7422.149: 1.2636% ( 81) 00:09:52.581 
7422.149 - 7474.789: 2.0245% ( 112) 00:09:52.581 7474.789 - 7527.428: 3.2948% ( 187) 00:09:52.581 7527.428 - 7580.067: 5.1155% ( 268) 00:09:52.581 7580.067 - 7632.707: 7.2758% ( 318) 00:09:52.581 7632.707 - 7685.346: 9.9185% ( 389) 00:09:52.581 7685.346 - 7737.986: 12.8940% ( 438) 00:09:52.581 7737.986 - 7790.625: 15.9647% ( 452) 00:09:52.581 7790.625 - 7843.264: 19.1168% ( 464) 00:09:52.581 7843.264 - 7895.904: 22.1943% ( 453) 00:09:52.581 7895.904 - 7948.543: 25.3736% ( 468) 00:09:52.581 7948.543 - 8001.182: 28.6345% ( 480) 00:09:52.581 8001.182 - 8053.822: 31.4334% ( 412) 00:09:52.581 8053.822 - 8106.461: 34.8234% ( 499) 00:09:52.581 8106.461 - 8159.100: 37.5883% ( 407) 00:09:52.581 8159.100 - 8211.740: 40.9307% ( 492) 00:09:52.581 8211.740 - 8264.379: 44.1101% ( 468) 00:09:52.581 8264.379 - 8317.018: 47.1943% ( 454) 00:09:52.581 8317.018 - 8369.658: 50.2649% ( 452) 00:09:52.581 8369.658 - 8422.297: 53.2677% ( 442) 00:09:52.581 8422.297 - 8474.937: 56.6168% ( 493) 00:09:52.581 8474.937 - 8527.576: 60.1223% ( 516) 00:09:52.581 8527.576 - 8580.215: 63.0367% ( 429) 00:09:52.581 8580.215 - 8632.855: 66.4198% ( 498) 00:09:52.581 8632.855 - 8685.494: 69.4361% ( 444) 00:09:52.581 8685.494 - 8738.133: 71.9769% ( 374) 00:09:52.581 8738.133 - 8790.773: 74.5584% ( 380) 00:09:52.581 8790.773 - 8843.412: 76.9701% ( 355) 00:09:52.581 8843.412 - 8896.051: 78.9742% ( 295) 00:09:52.581 8896.051 - 8948.691: 81.3723% ( 353) 00:09:52.581 8948.691 - 9001.330: 83.0639% ( 249) 00:09:52.581 9001.330 - 9053.969: 84.8438% ( 262) 00:09:52.581 9053.969 - 9106.609: 86.3451% ( 221) 00:09:52.581 9106.609 - 9159.248: 87.9688% ( 239) 00:09:52.581 9159.248 - 9211.888: 89.4633% ( 220) 00:09:52.581 9211.888 - 9264.527: 90.1495% ( 101) 00:09:52.581 9264.527 - 9317.166: 90.9783% ( 122) 00:09:52.581 9317.166 - 9369.806: 91.6236% ( 95) 00:09:52.581 9369.806 - 9422.445: 92.3438% ( 106) 00:09:52.581 9422.445 - 9475.084: 92.9348% ( 87) 00:09:52.581 9475.084 - 9527.724: 93.2812% ( 51) 00:09:52.581 9527.724 - 9580.363: 93.8315% ( 81) 00:09:52.581 9580.363 - 9633.002: 94.5992% ( 113) 00:09:52.581 9633.002 - 9685.642: 94.9796% ( 56) 00:09:52.581 9685.642 - 9738.281: 95.4144% ( 64) 00:09:52.581 9738.281 - 9790.920: 95.8424% ( 63) 00:09:52.581 9790.920 - 9843.560: 96.2568% ( 61) 00:09:52.581 9843.560 - 9896.199: 96.5285% ( 40) 00:09:52.581 9896.199 - 9948.839: 97.0109% ( 71) 00:09:52.581 9948.839 - 10001.478: 97.1399% ( 19) 00:09:52.581 10001.478 - 10054.117: 97.2622% ( 18) 00:09:52.581 10054.117 - 10106.757: 97.3302% ( 10) 00:09:52.581 10106.757 - 10159.396: 97.3981% ( 10) 00:09:52.581 10159.396 - 10212.035: 97.4253% ( 4) 00:09:52.581 10212.035 - 10264.675: 97.4389% ( 2) 00:09:52.581 10264.675 - 10317.314: 97.4728% ( 5) 00:09:52.581 10317.314 - 10369.953: 97.5136% ( 6) 00:09:52.581 10369.953 - 10422.593: 97.6087% ( 14) 00:09:52.581 10422.593 - 10475.232: 97.6834% ( 11) 00:09:52.581 10475.232 - 10527.871: 97.7174% ( 5) 00:09:52.582 10527.871 - 10580.511: 97.7446% ( 4) 00:09:52.582 10580.511 - 10633.150: 97.7649% ( 3) 00:09:52.582 10633.150 - 10685.790: 97.7853% ( 3) 00:09:52.582 10685.790 - 10738.429: 97.7989% ( 2) 00:09:52.582 10738.429 - 10791.068: 97.8125% ( 2) 00:09:52.582 10791.068 - 10843.708: 97.8261% ( 2) 00:09:52.582 11843.855 - 11896.495: 97.8397% ( 2) 00:09:52.582 11896.495 - 11949.134: 97.8533% ( 2) 00:09:52.582 11949.134 - 12001.773: 97.8736% ( 3) 00:09:52.582 12001.773 - 12054.413: 97.9008% ( 4) 00:09:52.582 12054.413 - 12107.052: 98.0095% ( 16) 00:09:52.582 12107.052 - 12159.692: 98.0978% ( 13) 00:09:52.582 12159.692 - 
12212.331: 98.2405% ( 21) 00:09:52.582 12212.331 - 12264.970: 98.3696% ( 19) 00:09:52.582 12264.970 - 12317.610: 98.4511% ( 12) 00:09:52.582 12317.610 - 12370.249: 98.5054% ( 8) 00:09:52.582 12370.249 - 12422.888: 98.5326% ( 4) 00:09:52.582 12422.888 - 12475.528: 98.5666% ( 5) 00:09:52.582 12475.528 - 12528.167: 98.5802% ( 2) 00:09:52.582 12528.167 - 12580.806: 98.6073% ( 4) 00:09:52.582 12580.806 - 12633.446: 98.6345% ( 4) 00:09:52.582 12633.446 - 12686.085: 98.6549% ( 3) 00:09:52.582 12686.085 - 12738.724: 98.6685% ( 2) 00:09:52.582 12738.724 - 12791.364: 98.6821% ( 2) 00:09:52.582 12791.364 - 12844.003: 98.6957% ( 2) 00:09:52.582 14844.299 - 14949.578: 98.7228% ( 4) 00:09:52.582 14949.578 - 15054.856: 98.7636% ( 6) 00:09:52.582 15054.856 - 15160.135: 99.0421% ( 41) 00:09:52.582 15160.135 - 15265.414: 99.0761% ( 5) 00:09:52.582 15265.414 - 15370.692: 99.0965% ( 3) 00:09:52.582 15370.692 - 15475.971: 99.1236% ( 4) 00:09:52.582 15475.971 - 15581.250: 99.1304% ( 1) 00:09:52.582 28635.810 - 28846.368: 99.1440% ( 2) 00:09:52.582 28846.368 - 29056.925: 99.1780% ( 5) 00:09:52.582 29056.925 - 29267.483: 99.2188% ( 6) 00:09:52.582 29267.483 - 29478.040: 99.2527% ( 5) 00:09:52.582 29478.040 - 29688.598: 99.2867% ( 5) 00:09:52.582 29688.598 - 29899.155: 99.3274% ( 6) 00:09:52.582 29899.155 - 30109.712: 99.3614% ( 5) 00:09:52.582 30109.712 - 30320.270: 99.4022% ( 6) 00:09:52.582 30320.270 - 30530.827: 99.4361% ( 5) 00:09:52.582 30530.827 - 30741.385: 99.4769% ( 6) 00:09:52.582 30741.385 - 30951.942: 99.5109% ( 5) 00:09:52.582 30951.942 - 31162.500: 99.5516% ( 6) 00:09:52.582 31162.500 - 31373.057: 99.5652% ( 2) 00:09:52.582 38532.010 - 38742.567: 99.5720% ( 1) 00:09:52.582 38742.567 - 38953.124: 99.6060% ( 5) 00:09:52.582 38953.124 - 39163.682: 99.6399% ( 5) 00:09:52.582 39163.682 - 39374.239: 99.6739% ( 5) 00:09:52.582 39374.239 - 39584.797: 99.7079% ( 5) 00:09:52.582 39584.797 - 39795.354: 99.7351% ( 4) 00:09:52.582 39795.354 - 40005.912: 99.7690% ( 5) 00:09:52.582 40005.912 - 40216.469: 99.8030% ( 5) 00:09:52.582 40216.469 - 40427.027: 99.8370% ( 5) 00:09:52.582 40427.027 - 40637.584: 99.8709% ( 5) 00:09:52.582 40637.584 - 40848.141: 99.8981% ( 4) 00:09:52.582 40848.141 - 41058.699: 99.9321% ( 5) 00:09:52.582 41058.699 - 41269.256: 99.9728% ( 6) 00:09:52.582 41269.256 - 41479.814: 100.0000% ( 4) 00:09:52.582 00:09:52.582 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:52.582 ============================================================================== 00:09:52.582 Range in us Cumulative IO count 00:09:52.582 6895.756 - 6948.395: 0.0068% ( 1) 00:09:52.582 7106.313 - 7158.953: 0.0272% ( 3) 00:09:52.582 7158.953 - 7211.592: 0.0883% ( 9) 00:09:52.582 7211.592 - 7264.231: 0.2174% ( 19) 00:09:52.582 7264.231 - 7316.871: 0.4280% ( 31) 00:09:52.582 7316.871 - 7369.510: 0.9443% ( 76) 00:09:52.582 7369.510 - 7422.149: 1.4674% ( 77) 00:09:52.582 7422.149 - 7474.789: 2.3777% ( 134) 00:09:52.582 7474.789 - 7527.428: 3.6889% ( 193) 00:09:52.582 7527.428 - 7580.067: 5.3397% ( 243) 00:09:52.582 7580.067 - 7632.707: 7.3641% ( 298) 00:09:52.582 7632.707 - 7685.346: 9.5516% ( 322) 00:09:52.582 7685.346 - 7737.986: 12.0380% ( 366) 00:09:52.582 7737.986 - 7790.625: 14.8981% ( 421) 00:09:52.582 7790.625 - 7843.264: 17.5883% ( 396) 00:09:52.582 7843.264 - 7895.904: 20.9171% ( 490) 00:09:52.582 7895.904 - 7948.543: 23.9674% ( 449) 00:09:52.582 7948.543 - 8001.182: 27.2690% ( 486) 00:09:52.582 8001.182 - 8053.822: 30.6726% ( 501) 00:09:52.582 8053.822 - 8106.461: 34.0014% ( 490) 00:09:52.582 8106.461 
- 8159.100: 37.3777% ( 497) 00:09:52.582 8159.100 - 8211.740: 40.9443% ( 525) 00:09:52.582 8211.740 - 8264.379: 44.7147% ( 555) 00:09:52.582 8264.379 - 8317.018: 48.1726% ( 509) 00:09:52.582 8317.018 - 8369.658: 51.3995% ( 475) 00:09:52.582 8369.658 - 8422.297: 54.7826% ( 498) 00:09:52.582 8422.297 - 8474.937: 58.3424% ( 524) 00:09:52.582 8474.937 - 8527.576: 61.4810% ( 462) 00:09:52.582 8527.576 - 8580.215: 64.7418% ( 480) 00:09:52.582 8580.215 - 8632.855: 67.5068% ( 407) 00:09:52.582 8632.855 - 8685.494: 70.2310% ( 401) 00:09:52.582 8685.494 - 8738.133: 72.6223% ( 352) 00:09:52.582 8738.133 - 8790.773: 74.8234% ( 324) 00:09:52.582 8790.773 - 8843.412: 77.1535% ( 343) 00:09:52.582 8843.412 - 8896.051: 79.3478% ( 323) 00:09:52.582 8896.051 - 8948.691: 81.5625% ( 326) 00:09:52.582 8948.691 - 9001.330: 83.9402% ( 350) 00:09:52.582 9001.330 - 9053.969: 85.9171% ( 291) 00:09:52.582 9053.969 - 9106.609: 87.0041% ( 160) 00:09:52.582 9106.609 - 9159.248: 88.1861% ( 174) 00:09:52.582 9159.248 - 9211.888: 89.2527% ( 157) 00:09:52.582 9211.888 - 9264.527: 89.9660% ( 105) 00:09:52.582 9264.527 - 9317.166: 90.7880% ( 121) 00:09:52.582 9317.166 - 9369.806: 91.2840% ( 73) 00:09:52.582 9369.806 - 9422.445: 91.9293% ( 95) 00:09:52.582 9422.445 - 9475.084: 92.5611% ( 93) 00:09:52.582 9475.084 - 9527.724: 92.9620% ( 59) 00:09:52.582 9527.724 - 9580.363: 93.4375% ( 70) 00:09:52.582 9580.363 - 9633.002: 93.8451% ( 60) 00:09:52.582 9633.002 - 9685.642: 94.1304% ( 42) 00:09:52.582 9685.642 - 9738.281: 94.5041% ( 55) 00:09:52.582 9738.281 - 9790.920: 95.0136% ( 75) 00:09:52.582 9790.920 - 9843.560: 95.4959% ( 71) 00:09:52.582 9843.560 - 9896.199: 96.0666% ( 84) 00:09:52.582 9896.199 - 9948.839: 96.5761% ( 75) 00:09:52.582 9948.839 - 10001.478: 96.7935% ( 32) 00:09:52.582 10001.478 - 10054.117: 96.9429% ( 22) 00:09:52.582 10054.117 - 10106.757: 97.0652% ( 18) 00:09:52.582 10106.757 - 10159.396: 97.1943% ( 19) 00:09:52.582 10159.396 - 10212.035: 97.4117% ( 32) 00:09:52.582 10212.035 - 10264.675: 97.5679% ( 23) 00:09:52.582 10264.675 - 10317.314: 97.6427% ( 11) 00:09:52.582 10317.314 - 10369.953: 97.6834% ( 6) 00:09:52.582 10369.953 - 10422.593: 97.7174% ( 5) 00:09:52.582 10422.593 - 10475.232: 97.7514% ( 5) 00:09:52.582 10475.232 - 10527.871: 97.7785% ( 4) 00:09:52.582 10527.871 - 10580.511: 97.7921% ( 2) 00:09:52.582 10580.511 - 10633.150: 97.8125% ( 3) 00:09:52.582 10633.150 - 10685.790: 97.8261% ( 2) 00:09:52.582 11580.659 - 11633.298: 97.8397% ( 2) 00:09:52.582 11633.298 - 11685.937: 97.8804% ( 6) 00:09:52.582 11685.937 - 11738.577: 97.9620% ( 12) 00:09:52.583 11738.577 - 11791.216: 98.0639% ( 15) 00:09:52.583 11791.216 - 11843.855: 98.0978% ( 5) 00:09:52.583 11843.855 - 11896.495: 98.1726% ( 11) 00:09:52.583 11896.495 - 11949.134: 98.1861% ( 2) 00:09:52.583 11949.134 - 12001.773: 98.1997% ( 2) 00:09:52.583 12001.773 - 12054.413: 98.2065% ( 1) 00:09:52.583 12054.413 - 12107.052: 98.2201% ( 2) 00:09:52.583 12107.052 - 12159.692: 98.2337% ( 2) 00:09:52.583 12159.692 - 12212.331: 98.2473% ( 2) 00:09:52.583 12212.331 - 12264.970: 98.2609% ( 2) 00:09:52.583 12264.970 - 12317.610: 98.2745% ( 2) 00:09:52.583 12317.610 - 12370.249: 98.2880% ( 2) 00:09:52.583 12370.249 - 12422.888: 98.3084% ( 3) 00:09:52.583 12422.888 - 12475.528: 98.3152% ( 1) 00:09:52.583 12475.528 - 12528.167: 98.3288% ( 2) 00:09:52.583 12528.167 - 12580.806: 98.3560% ( 4) 00:09:52.583 12580.806 - 12633.446: 98.3696% ( 2) 00:09:52.583 12633.446 - 12686.085: 98.4307% ( 9) 00:09:52.583 12686.085 - 12738.724: 98.5122% ( 12) 00:09:52.583 12738.724 - 
12791.364: 98.5870% ( 11) 00:09:52.583 12791.364 - 12844.003: 98.6073% ( 3) 00:09:52.583 12844.003 - 12896.643: 98.6209% ( 2) 00:09:52.583 12896.643 - 12949.282: 98.6345% ( 2) 00:09:52.583 12949.282 - 13001.921: 98.6549% ( 3) 00:09:52.583 13001.921 - 13054.561: 98.6685% ( 2) 00:09:52.583 13054.561 - 13107.200: 98.6889% ( 3) 00:09:52.583 13107.200 - 13159.839: 98.6957% ( 1) 00:09:52.583 14633.741 - 14739.020: 98.7160% ( 3) 00:09:52.583 14739.020 - 14844.299: 98.8315% ( 17) 00:09:52.583 14844.299 - 14949.578: 98.9742% ( 21) 00:09:52.583 14949.578 - 15054.856: 99.0557% ( 12) 00:09:52.583 15054.856 - 15160.135: 99.0761% ( 3) 00:09:52.583 15160.135 - 15265.414: 99.1033% ( 4) 00:09:52.583 15265.414 - 15370.692: 99.1236% ( 3) 00:09:52.583 15370.692 - 15475.971: 99.1304% ( 1) 00:09:52.583 26740.794 - 26846.072: 99.1508% ( 3) 00:09:52.583 26846.072 - 26951.351: 99.1644% ( 2) 00:09:52.583 26951.351 - 27161.908: 99.1984% ( 5) 00:09:52.583 27161.908 - 27372.466: 99.2391% ( 6) 00:09:52.583 27372.466 - 27583.023: 99.2731% ( 5) 00:09:52.583 27583.023 - 27793.581: 99.3139% ( 6) 00:09:52.583 27793.581 - 28004.138: 99.3546% ( 6) 00:09:52.583 28004.138 - 28214.696: 99.3886% ( 5) 00:09:52.583 28214.696 - 28425.253: 99.4226% ( 5) 00:09:52.583 28425.253 - 28635.810: 99.4565% ( 5) 00:09:52.583 28635.810 - 28846.368: 99.4905% ( 5) 00:09:52.583 28846.368 - 29056.925: 99.5245% ( 5) 00:09:52.583 29056.925 - 29267.483: 99.5584% ( 5) 00:09:52.583 29267.483 - 29478.040: 99.5652% ( 1) 00:09:52.583 36847.550 - 37058.108: 99.5992% ( 5) 00:09:52.583 37058.108 - 37268.665: 99.6332% ( 5) 00:09:52.583 37268.665 - 37479.222: 99.6671% ( 5) 00:09:52.583 37479.222 - 37689.780: 99.6943% ( 4) 00:09:52.583 37689.780 - 37900.337: 99.7283% ( 5) 00:09:52.583 37900.337 - 38110.895: 99.7622% ( 5) 00:09:52.583 38110.895 - 38321.452: 99.7962% ( 5) 00:09:52.583 38321.452 - 38532.010: 99.8234% ( 4) 00:09:52.583 38532.010 - 38742.567: 99.8573% ( 5) 00:09:52.583 38742.567 - 38953.124: 99.8913% ( 5) 00:09:52.583 38953.124 - 39163.682: 99.9321% ( 6) 00:09:52.583 39163.682 - 39374.239: 99.9592% ( 4) 00:09:52.583 39374.239 - 39584.797: 99.9932% ( 5) 00:09:52.583 39584.797 - 39795.354: 100.0000% ( 1) 00:09:52.583 00:09:52.583 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:52.583 ============================================================================== 00:09:52.583 Range in us Cumulative IO count 00:09:52.583 6895.756 - 6948.395: 0.0068% ( 1) 00:09:52.583 7001.035 - 7053.674: 0.0203% ( 2) 00:09:52.583 7053.674 - 7106.313: 0.0609% ( 6) 00:09:52.583 7106.313 - 7158.953: 0.1150% ( 8) 00:09:52.583 7158.953 - 7211.592: 0.1759% ( 9) 00:09:52.583 7211.592 - 7264.231: 0.3788% ( 30) 00:09:52.583 7264.231 - 7316.871: 0.5749% ( 29) 00:09:52.583 7316.871 - 7369.510: 1.0011% ( 63) 00:09:52.583 7369.510 - 7422.149: 1.8398% ( 124) 00:09:52.583 7422.149 - 7474.789: 2.5771% ( 109) 00:09:52.583 7474.789 - 7527.428: 3.6323% ( 156) 00:09:52.583 7527.428 - 7580.067: 4.8498% ( 180) 00:09:52.583 7580.067 - 7632.707: 6.3650% ( 224) 00:09:52.583 7632.707 - 7685.346: 8.5904% ( 329) 00:09:52.583 7685.346 - 7737.986: 11.3975% ( 415) 00:09:52.583 7737.986 - 7790.625: 14.5089% ( 460) 00:09:52.583 7790.625 - 7843.264: 17.5392% ( 448) 00:09:52.583 7843.264 - 7895.904: 20.4478% ( 430) 00:09:52.583 7895.904 - 7948.543: 23.2955% ( 421) 00:09:52.583 7948.543 - 8001.182: 26.8466% ( 525) 00:09:52.583 8001.182 - 8053.822: 30.0122% ( 468) 00:09:52.583 8053.822 - 8106.461: 33.3266% ( 490) 00:09:52.583 8106.461 - 8159.100: 36.9656% ( 538) 00:09:52.583 8159.100 - 
8211.740: 40.5438% ( 529) 00:09:52.583 8211.740 - 8264.379: 44.0476% ( 518) 00:09:52.583 8264.379 - 8317.018: 48.0317% ( 589) 00:09:52.583 8317.018 - 8369.658: 51.7384% ( 548) 00:09:52.583 8369.658 - 8422.297: 55.5262% ( 560) 00:09:52.583 8422.297 - 8474.937: 59.0097% ( 515) 00:09:52.583 8474.937 - 8527.576: 62.1550% ( 465) 00:09:52.583 8527.576 - 8580.215: 65.2056% ( 451) 00:09:52.583 8580.215 - 8632.855: 68.2495% ( 450) 00:09:52.583 8632.855 - 8685.494: 70.8739% ( 388) 00:09:52.583 8685.494 - 8738.133: 73.4916% ( 387) 00:09:52.583 8738.133 - 8790.773: 75.7373% ( 332) 00:09:52.583 8790.773 - 8843.412: 77.8815% ( 317) 00:09:52.583 8843.412 - 8896.051: 79.8431% ( 290) 00:09:52.583 8896.051 - 8948.691: 81.7100% ( 276) 00:09:52.583 8948.691 - 9001.330: 83.4957% ( 264) 00:09:52.583 9001.330 - 9053.969: 85.0920% ( 236) 00:09:52.583 9053.969 - 9106.609: 86.3028% ( 179) 00:09:52.583 9106.609 - 9159.248: 87.4459% ( 169) 00:09:52.583 9159.248 - 9211.888: 88.4537% ( 149) 00:09:52.583 9211.888 - 9264.527: 89.2992% ( 125) 00:09:52.583 9264.527 - 9317.166: 90.2327% ( 138) 00:09:52.583 9317.166 - 9369.806: 90.9632% ( 108) 00:09:52.583 9369.806 - 9422.445: 91.5179% ( 82) 00:09:52.583 9422.445 - 9475.084: 92.1604% ( 95) 00:09:52.583 9475.084 - 9527.724: 92.8571% ( 103) 00:09:52.583 9527.724 - 9580.363: 93.1818% ( 48) 00:09:52.583 9580.363 - 9633.002: 93.5133% ( 49) 00:09:52.583 9633.002 - 9685.642: 93.9529% ( 65) 00:09:52.583 9685.642 - 9738.281: 94.2302% ( 41) 00:09:52.583 9738.281 - 9790.920: 94.5549% ( 48) 00:09:52.583 9790.920 - 9843.560: 94.8390% ( 42) 00:09:52.584 9843.560 - 9896.199: 95.2449% ( 60) 00:09:52.584 9896.199 - 9948.839: 95.7792% ( 79) 00:09:52.584 9948.839 - 10001.478: 96.0363% ( 38) 00:09:52.584 10001.478 - 10054.117: 96.4624% ( 63) 00:09:52.584 10054.117 - 10106.757: 96.6856% ( 33) 00:09:52.584 10106.757 - 10159.396: 96.8547% ( 25) 00:09:52.584 10159.396 - 10212.035: 96.9156% ( 9) 00:09:52.584 10212.035 - 10264.675: 96.9426% ( 4) 00:09:52.584 10264.675 - 10317.314: 96.9832% ( 6) 00:09:52.584 10317.314 - 10369.953: 97.0170% ( 5) 00:09:52.584 10369.953 - 10422.593: 97.0509% ( 5) 00:09:52.584 10422.593 - 10475.232: 97.0644% ( 2) 00:09:52.584 10475.232 - 10527.871: 97.0847% ( 3) 00:09:52.584 10527.871 - 10580.511: 97.0982% ( 2) 00:09:52.584 10580.511 - 10633.150: 97.1388% ( 6) 00:09:52.584 10633.150 - 10685.790: 97.1861% ( 7) 00:09:52.584 10685.790 - 10738.429: 97.2673% ( 12) 00:09:52.584 10738.429 - 10791.068: 97.3214% ( 8) 00:09:52.584 10791.068 - 10843.708: 97.3350% ( 2) 00:09:52.584 10843.708 - 10896.347: 97.3552% ( 3) 00:09:52.584 10896.347 - 10948.986: 97.3823% ( 4) 00:09:52.584 10948.986 - 11001.626: 97.4094% ( 4) 00:09:52.584 11001.626 - 11054.265: 97.4499% ( 6) 00:09:52.584 11054.265 - 11106.904: 97.5244% ( 11) 00:09:52.584 11106.904 - 11159.544: 97.5649% ( 6) 00:09:52.584 11159.544 - 11212.183: 97.6055% ( 6) 00:09:52.584 11212.183 - 11264.822: 97.6596% ( 8) 00:09:52.584 11264.822 - 11317.462: 97.7205% ( 9) 00:09:52.584 11317.462 - 11370.101: 97.7408% ( 3) 00:09:52.584 11370.101 - 11422.741: 97.7476% ( 1) 00:09:52.584 11422.741 - 11475.380: 97.7543% ( 1) 00:09:52.584 11475.380 - 11528.019: 97.7679% ( 2) 00:09:52.584 11528.019 - 11580.659: 97.7814% ( 2) 00:09:52.584 11580.659 - 11633.298: 97.7949% ( 2) 00:09:52.584 11633.298 - 11685.937: 97.8084% ( 2) 00:09:52.584 11685.937 - 11738.577: 97.8220% ( 2) 00:09:52.584 11738.577 - 11791.216: 97.8287% ( 1) 00:09:52.584 11791.216 - 11843.855: 97.8355% ( 1) 00:09:52.584 12791.364 - 12844.003: 97.8558% ( 3) 00:09:52.584 12844.003 - 
12896.643: 97.8693% ( 2) 00:09:52.584 12896.643 - 12949.282: 97.8828% ( 2) 00:09:52.584 12949.282 - 13001.921: 97.8964% ( 2) 00:09:52.584 13001.921 - 13054.561: 97.9099% ( 2) 00:09:52.584 13054.561 - 13107.200: 97.9234% ( 2) 00:09:52.584 13107.200 - 13159.839: 97.9437% ( 3) 00:09:52.584 13159.839 - 13212.479: 97.9775% ( 5) 00:09:52.584 13212.479 - 13265.118: 98.0114% ( 5) 00:09:52.584 13265.118 - 13317.757: 98.0587% ( 7) 00:09:52.584 13317.757 - 13370.397: 98.1061% ( 7) 00:09:52.584 13370.397 - 13423.036: 98.1399% ( 5) 00:09:52.584 13423.036 - 13475.676: 98.1737% ( 5) 00:09:52.584 13475.676 - 13580.954: 98.2616% ( 13) 00:09:52.584 13580.954 - 13686.233: 98.2684% ( 1) 00:09:52.584 14002.069 - 14107.348: 98.2819% ( 2) 00:09:52.584 14107.348 - 14212.627: 98.3902% ( 16) 00:09:52.584 14212.627 - 14317.905: 98.4916% ( 15) 00:09:52.584 14317.905 - 14423.184: 98.5931% ( 15) 00:09:52.584 14423.184 - 14528.463: 98.6134% ( 3) 00:09:52.584 14528.463 - 14633.741: 98.6337% ( 3) 00:09:52.584 14633.741 - 14739.020: 98.6540% ( 3) 00:09:52.584 14739.020 - 14844.299: 98.6742% ( 3) 00:09:52.584 14844.299 - 14949.578: 98.6945% ( 3) 00:09:52.584 14949.578 - 15054.856: 98.7013% ( 1) 00:09:52.584 15160.135 - 15265.414: 98.7081% ( 1) 00:09:52.584 15265.414 - 15370.692: 98.7419% ( 5) 00:09:52.584 15370.692 - 15475.971: 98.7825% ( 6) 00:09:52.584 15475.971 - 15581.250: 98.8095% ( 4) 00:09:52.584 15581.250 - 15686.529: 98.8298% ( 3) 00:09:52.584 15686.529 - 15791.807: 98.8704% ( 6) 00:09:52.584 15791.807 - 15897.086: 98.9042% ( 5) 00:09:52.584 15897.086 - 16002.365: 98.9448% ( 6) 00:09:52.584 16002.365 - 16107.643: 99.0327% ( 13) 00:09:52.584 16107.643 - 16212.922: 99.1004% ( 10) 00:09:52.584 16212.922 - 16318.201: 99.1342% ( 5) 00:09:52.584 18107.939 - 18213.218: 99.1477% ( 2) 00:09:52.584 18213.218 - 18318.496: 99.1748% ( 4) 00:09:52.584 18318.496 - 18423.775: 99.1951% ( 3) 00:09:52.584 18423.775 - 18529.054: 99.2154% ( 3) 00:09:52.584 18529.054 - 18634.333: 99.2357% ( 3) 00:09:52.584 18634.333 - 18739.611: 99.2627% ( 4) 00:09:52.584 18739.611 - 18844.890: 99.2830% ( 3) 00:09:52.584 18844.890 - 18950.169: 99.3101% ( 4) 00:09:52.584 18950.169 - 19055.447: 99.3304% ( 3) 00:09:52.584 19055.447 - 19160.726: 99.3574% ( 4) 00:09:52.584 19160.726 - 19266.005: 99.3845% ( 4) 00:09:52.584 19266.005 - 19371.284: 99.4048% ( 3) 00:09:52.584 19371.284 - 19476.562: 99.4251% ( 3) 00:09:52.584 19476.562 - 19581.841: 99.4521% ( 4) 00:09:52.584 19581.841 - 19687.120: 99.4724% ( 3) 00:09:52.584 19687.120 - 19792.398: 99.4995% ( 4) 00:09:52.584 19792.398 - 19897.677: 99.5198% ( 3) 00:09:52.584 19897.677 - 20002.956: 99.5468% ( 4) 00:09:52.584 20002.956 - 20108.235: 99.5671% ( 3) 00:09:52.584 26530.236 - 26635.515: 99.5874% ( 3) 00:09:52.584 26635.515 - 26740.794: 99.6009% ( 2) 00:09:52.584 26740.794 - 26846.072: 99.6212% ( 3) 00:09:52.584 26846.072 - 26951.351: 99.6415% ( 3) 00:09:52.584 26951.351 - 27161.908: 99.6753% ( 5) 00:09:52.584 27161.908 - 27372.466: 99.7024% ( 4) 00:09:52.584 27372.466 - 27583.023: 99.7362% ( 5) 00:09:52.584 27583.023 - 27793.581: 99.7700% ( 5) 00:09:52.584 27793.581 - 28004.138: 99.8038% ( 5) 00:09:52.584 28004.138 - 28214.696: 99.8377% ( 5) 00:09:52.584 28214.696 - 28425.253: 99.8715% ( 5) 00:09:52.584 28425.253 - 28635.810: 99.8985% ( 4) 00:09:52.584 28635.810 - 28846.368: 99.9324% ( 5) 00:09:52.584 28846.368 - 29056.925: 99.9594% ( 4) 00:09:52.584 29056.925 - 29267.483: 99.9932% ( 5) 00:09:52.584 29267.483 - 29478.040: 100.0000% ( 1) 00:09:52.584 00:09:52.584 03:20:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- 
# '[' -b /dev/ram0 ']'
00:09:52.584
00:09:52.584 real 0m2.722s
00:09:52.584 user 0m2.290s
00:09:52.584 sys 0m0.323s
00:09:52.584 03:20:15 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:52.584 03:20:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:52.584 ************************************
00:09:52.584 END TEST nvme_perf
00:09:52.584 ************************************
00:09:52.584 03:20:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:52.584 03:20:15 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:09:52.584 03:20:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:52.584 03:20:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:52.584 ************************************
00:09:52.584 START TEST nvme_hello_world
00:09:52.584 ************************************
00:09:52.584 03:20:16 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:52.843 Initializing NVMe Controllers
00:09:52.843 Attached to 0000:00:10.0
00:09:52.843 Namespace ID: 1 size: 6GB
00:09:52.843 Attached to 0000:00:11.0
00:09:52.843 Namespace ID: 1 size: 5GB
00:09:52.843 Attached to 0000:00:13.0
00:09:52.843 Namespace ID: 1 size: 1GB
00:09:52.843 Attached to 0000:00:12.0
00:09:52.843 Namespace ID: 1 size: 4GB
00:09:52.843 Namespace ID: 2 size: 4GB
00:09:52.843 Namespace ID: 3 size: 4GB
00:09:52.843 Initialization complete.
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
00:09:52.843 INFO: using host memory buffer for IO
00:09:52.843 Hello world!
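An aside on the nvme_perf stage that ended above: the run was spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0, i.e. queue depth 128, a write workload, 12288-byte (12 KiB) I/Os, a one-second run, and -i 0 for the shared-memory instance id; doubling -L is read here as also requesting the per-bucket latency histograms, an inference from the fact that both percentile summaries and histograms were printed (spdk_nvme_perf --help is authoritative). The percentile summaries are lookups into the cumulative histograms: the reported value is the upper bound of the first bucket whose cumulative percentage reaches the target. For 0000:00:10.0, the 99.00000% : 16528.758us summary entry matches the first bucket at or above 99% cumulative (16423.480 - 16528.758: 99.0014%). A rough sketch of the same lookup, assuming one device's histogram lines have been saved to a file (name illustrative) with the leading timestamps stripped:

    target=99
    awk -v t="$target" '/ - .*%/ {
      cum = $4; sub(/%/, "", cum)      # cumulative percentage for this bucket
      if (cum + 0 >= t) {              # first bucket at or above the target
        up = $3; sub(/:/, "", up)      # bucket upper bound in microseconds
        print "p" t " <= " up " us"
        exit
      }
    }' histogram-10-0.txt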
00:09:52.843 ************************************
00:09:52.843 END TEST nvme_hello_world
************************************
00:09:52.843
00:09:52.843 real 0m0.310s
00:09:52.843 user 0m0.105s
00:09:52.843 sys 0m0.162s
00:09:52.843 03:20:16 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:52.843 03:20:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:52.843 03:20:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:52.843 03:20:16 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:52.843 03:20:16 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:52.843 03:20:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:52.844 ************************************
00:09:52.844 START TEST nvme_sgl
00:09:52.844 ************************************
00:09:52.844 03:20:16 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:53.104 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:53.104 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:53.104 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:53.104 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:53.104 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:53.363 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:53.363 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:53.363 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_9 Invalid IO length parameter
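Stepping back to the nvme_hello_world example that completed above: the six "Hello world!" lines match the six namespaces enumerated at attach time (one each on 0000:00:10.0, 11.0, and 13.0, and three on 12.0), with each namespace written and read back through a host memory buffer. A minimal sketch for reproducing that stage by hand, assuming an SPDK checkout at the CI path and that the controllers have already been rebound to a userspace driver (both assumptions, not shown in this log):

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                  # rebind the NVMe controllers away from the kernel driver
    sudo build/examples/hello_world -i 0   # -i 0: shared-memory instance id, matching the run above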
00:09:53.363 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:53.363 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:53.363 NVMe Readv/Writev Request test
00:09:53.363 Attached to 0000:00:10.0
00:09:53.363 Attached to 0000:00:11.0
00:09:53.363 Attached to 0000:00:13.0
00:09:53.363 Attached to 0000:00:12.0
00:09:53.363 0000:00:10.0: build_io_request_2 test passed
00:09:53.363 0000:00:10.0: build_io_request_4 test passed
00:09:53.363 0000:00:10.0: build_io_request_5 test passed
00:09:53.363 0000:00:10.0: build_io_request_6 test passed
00:09:53.363 0000:00:10.0: build_io_request_7 test passed
00:09:53.363 0000:00:10.0: build_io_request_10 test passed
00:09:53.363 0000:00:11.0: build_io_request_2 test passed
00:09:53.364 0000:00:11.0: build_io_request_4 test passed
00:09:53.364 0000:00:11.0: build_io_request_5 test passed
00:09:53.364 0000:00:11.0: build_io_request_6 test passed
00:09:53.364 0000:00:11.0: build_io_request_7 test passed
00:09:53.364 0000:00:11.0: build_io_request_10 test passed
00:09:53.364 Cleaning up...
00:09:53.364
00:09:53.364 real 0m0.382s
00:09:53.364 user 0m0.193s
00:09:53.364 sys 0m0.144s
00:09:53.364 03:20:16 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:53.364 03:20:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:53.364 ************************************
00:09:53.364 END TEST nvme_sgl
************************************
00:09:53.364 03:20:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:53.364 03:20:16 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:53.364 03:20:16 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:53.364 03:20:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:53.364 ************************************
00:09:53.364 START TEST nvme_e2edp
00:09:53.364 ************************************
00:09:53.364 03:20:16 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:53.622 NVMe Write/Read with End-to-End data protection test
00:09:53.622 Attached to 0000:00:10.0
00:09:53.622 Attached to 0000:00:11.0
00:09:53.622 Attached to 0000:00:13.0
00:09:53.622 Attached to 0000:00:12.0
00:09:53.622 Cleaning up...
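On the nvme_sgl stage above: build_io_request_0 through build_io_request_11 each construct a scatter-gather read/write request with a deliberately shaped payload. The "Invalid IO length parameter" lines are the expected rejections of malformed SGL lengths, while the "test passed" lines are the requests that must complete. In this run, 0000:00:10.0 and 0000:00:11.0 each rejected six variants and passed the other six, whereas 0000:00:13.0 and 0000:00:12.0 rejected all twelve and report no "test passed" lines at all, presumably a difference in SGL support among the emulated 1b36:0010 controllers (an inference; the test source under test/nvme/sgl/ is authoritative). A quick tally over a saved copy of the log, file name illustrative:

    grep -c 'Invalid IO length parameter' nvme_sgl.log   # expected rejections (36 in this run)
    grep -c 'test passed' nvme_sgl.log                   # required successes (12 in this run)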
00:09:53.622 ************************************ 00:09:53.622 END TEST nvme_e2edp 00:09:53.622 ************************************ 00:09:53.622 00:09:53.622 real 0m0.282s 00:09:53.622 user 0m0.096s 00:09:53.622 sys 0m0.144s 00:09:53.622 03:20:17 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.622 03:20:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:53.623 03:20:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:53.623 03:20:17 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:53.623 03:20:17 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.623 03:20:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.882 ************************************ 00:09:53.882 START TEST nvme_reserve 00:09:53.882 ************************************ 00:09:53.882 03:20:17 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:54.141 ===================================================== 00:09:54.141 NVMe Controller at PCI bus 0, device 16, function 0 00:09:54.141 ===================================================== 00:09:54.141 Reservations: Not Supported 00:09:54.141 ===================================================== 00:09:54.141 NVMe Controller at PCI bus 0, device 17, function 0 00:09:54.141 ===================================================== 00:09:54.141 Reservations: Not Supported 00:09:54.141 ===================================================== 00:09:54.141 NVMe Controller at PCI bus 0, device 19, function 0 00:09:54.141 ===================================================== 00:09:54.141 Reservations: Not Supported 00:09:54.141 ===================================================== 00:09:54.141 NVMe Controller at PCI bus 0, device 18, function 0 00:09:54.141 ===================================================== 00:09:54.141 Reservations: Not Supported 00:09:54.141 Reservation test passed 00:09:54.141 00:09:54.141 real 0m0.302s 00:09:54.141 user 0m0.109s 00:09:54.141 sys 0m0.152s 00:09:54.141 ************************************ 00:09:54.141 END TEST nvme_reserve 00:09:54.141 ************************************ 00:09:54.141 03:20:17 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.142 03:20:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 03:20:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:54.142 03:20:17 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:54.142 03:20:17 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.142 03:20:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.142 ************************************ 00:09:54.142 START TEST nvme_err_injection 00:09:54.142 ************************************ 00:09:54.142 03:20:17 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:54.400 NVMe Error Injection test 00:09:54.400 Attached to 0000:00:10.0 00:09:54.400 Attached to 0000:00:11.0 00:09:54.400 Attached to 0000:00:13.0 00:09:54.400 Attached to 0000:00:12.0 00:09:54.400 0000:00:12.0: get features failed as expected 00:09:54.400 0000:00:10.0: get features failed as expected 00:09:54.400 0000:00:11.0: get features failed as expected 00:09:54.400 0000:00:13.0: get features failed as expected 00:09:54.401 
0000:00:13.0: get features successfully as expected 00:09:54.401 0000:00:12.0: get features successfully as expected 00:09:54.401 0000:00:10.0: get features successfully as expected 00:09:54.401 0000:00:11.0: get features successfully as expected 00:09:54.401 0000:00:13.0: read failed as expected 00:09:54.401 0000:00:10.0: read failed as expected 00:09:54.401 0000:00:11.0: read failed as expected 00:09:54.401 0000:00:12.0: read failed as expected 00:09:54.401 0000:00:12.0: read successfully as expected 00:09:54.401 0000:00:10.0: read successfully as expected 00:09:54.401 0000:00:11.0: read successfully as expected 00:09:54.401 0000:00:13.0: read successfully as expected 00:09:54.401 Cleaning up... 00:09:54.401 00:09:54.401 real 0m0.311s 00:09:54.401 user 0m0.111s 00:09:54.401 sys 0m0.163s 00:09:54.401 ************************************ 00:09:54.401 END TEST nvme_err_injection 00:09:54.401 ************************************ 00:09:54.401 03:20:17 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.401 03:20:17 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 03:20:17 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:54.401 03:20:17 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:09:54.401 03:20:17 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.401 03:20:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 ************************************ 00:09:54.401 START TEST nvme_overhead 00:09:54.401 ************************************ 00:09:54.401 03:20:17 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:55.778 Initializing NVMe Controllers 00:09:55.778 Attached to 0000:00:10.0 00:09:55.778 Attached to 0000:00:11.0 00:09:55.778 Attached to 0000:00:13.0 00:09:55.778 Attached to 0000:00:12.0 00:09:55.778 Initialization complete. Launching workers. 
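The nvme_overhead run starting here measures per-IO software cost rather than device latency: it times how long the submission call and the completion path each spend on the CPU, then buckets those samples into the histograms that follow. A sketch of the submit-side measurement using SPDK's env tick API (a single-block read with illustrative parameters, not the tool's exact loop):

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) { }

    /* Wall time, in nanoseconds, spent inside one submission call. */
    uint64_t time_one_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                             void *buf)
    {
        uint64_t t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qp, buf, /*lba=*/0, /*lba_count=*/1,
                              io_done, NULL, /*io_flags=*/0);
        uint64_t t1 = spdk_get_ticks();
        return (t1 - t0) * 1000000000ULL / spdk_get_ticks_hz();
    }

Aggregated over the run, samples like these produce the "submit (in ns) avg, min, max" summary and the "Range in us" cumulative histograms below.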
00:09:55.778 submit (in ns) avg, min, max = 13530.0, 12243.4, 81843.4 00:09:55.778 complete (in ns) avg, min, max = 9060.8, 8276.3, 1392420.9 00:09:55.778 00:09:55.778 Submit histogram 00:09:55.778 ================ 00:09:55.778 Range in us Cumulative Count 00:09:55.778 12.235 - 12.286: 0.0175% ( 1) 00:09:55.778 12.440 - 12.492: 0.0350% ( 1) 00:09:55.778 12.492 - 12.543: 0.0525% ( 1) 00:09:55.778 12.543 - 12.594: 0.1226% ( 4) 00:09:55.778 12.594 - 12.646: 0.4028% ( 16) 00:09:55.778 12.646 - 12.697: 0.7005% ( 17) 00:09:55.779 12.697 - 12.749: 1.6112% ( 52) 00:09:55.779 12.749 - 12.800: 3.6778% ( 118) 00:09:55.779 12.800 - 12.851: 7.2504% ( 204) 00:09:55.779 12.851 - 12.903: 12.6095% ( 306) 00:09:55.779 12.903 - 12.954: 19.9124% ( 417) 00:09:55.779 12.954 - 13.006: 27.8809% ( 455) 00:09:55.779 13.006 - 13.057: 36.0771% ( 468) 00:09:55.779 13.057 - 13.108: 42.4168% ( 362) 00:09:55.779 13.108 - 13.160: 48.6865% ( 358) 00:09:55.779 13.160 - 13.263: 61.1734% ( 713) 00:09:55.779 13.263 - 13.365: 74.5884% ( 766) 00:09:55.779 13.365 - 13.468: 84.6760% ( 576) 00:09:55.779 13.468 - 13.571: 89.5622% ( 279) 00:09:55.779 13.571 - 13.674: 92.2067% ( 151) 00:09:55.779 13.674 - 13.777: 93.5377% ( 76) 00:09:55.779 13.777 - 13.880: 94.0455% ( 29) 00:09:55.779 13.880 - 13.982: 94.2032% ( 9) 00:09:55.779 13.982 - 14.085: 94.4133% ( 12) 00:09:55.779 14.188 - 14.291: 94.4483% ( 2) 00:09:55.779 14.291 - 14.394: 94.4658% ( 1) 00:09:55.779 14.702 - 14.805: 94.5009% ( 2) 00:09:55.779 14.908 - 15.010: 94.5184% ( 1) 00:09:55.779 15.319 - 15.422: 94.5359% ( 1) 00:09:55.779 16.553 - 16.655: 94.5709% ( 2) 00:09:55.779 16.655 - 16.758: 94.6060% ( 2) 00:09:55.779 16.758 - 16.861: 94.6235% ( 1) 00:09:55.779 16.861 - 16.964: 94.6760% ( 3) 00:09:55.779 17.067 - 17.169: 94.7110% ( 2) 00:09:55.779 17.272 - 17.375: 94.7285% ( 1) 00:09:55.779 17.375 - 17.478: 94.8336% ( 6) 00:09:55.779 17.684 - 17.786: 94.8511% ( 1) 00:09:55.779 17.786 - 17.889: 94.9387% ( 5) 00:09:55.779 17.889 - 17.992: 95.0963% ( 9) 00:09:55.779 17.992 - 18.095: 95.2890% ( 11) 00:09:55.779 18.095 - 18.198: 95.6042% ( 18) 00:09:55.779 18.198 - 18.300: 95.8319% ( 13) 00:09:55.779 18.300 - 18.403: 96.1646% ( 19) 00:09:55.779 18.403 - 18.506: 96.4448% ( 16) 00:09:55.779 18.506 - 18.609: 96.8126% ( 21) 00:09:55.779 18.609 - 18.712: 97.1454% ( 19) 00:09:55.779 18.712 - 18.814: 97.4431% ( 17) 00:09:55.779 18.814 - 18.917: 97.6007% ( 9) 00:09:55.779 18.917 - 19.020: 97.9159% ( 18) 00:09:55.779 19.020 - 19.123: 98.0560% ( 8) 00:09:55.779 19.123 - 19.226: 98.2137% ( 9) 00:09:55.779 19.226 - 19.329: 98.3363% ( 7) 00:09:55.779 19.329 - 19.431: 98.3713% ( 2) 00:09:55.779 19.431 - 19.534: 98.4413% ( 4) 00:09:55.779 19.534 - 19.637: 98.5289% ( 5) 00:09:55.779 19.637 - 19.740: 98.5639% ( 2) 00:09:55.779 19.740 - 19.843: 98.6690% ( 6) 00:09:55.779 19.843 - 19.945: 98.7391% ( 4) 00:09:55.779 19.945 - 20.048: 98.8266% ( 5) 00:09:55.779 20.048 - 20.151: 98.8616% ( 2) 00:09:55.779 20.151 - 20.254: 98.8792% ( 1) 00:09:55.779 20.254 - 20.357: 98.8967% ( 1) 00:09:55.779 20.357 - 20.459: 98.9142% ( 1) 00:09:55.779 20.459 - 20.562: 98.9492% ( 2) 00:09:55.779 20.562 - 20.665: 98.9842% ( 2) 00:09:55.779 20.665 - 20.768: 99.0193% ( 2) 00:09:55.779 20.768 - 20.871: 99.0543% ( 2) 00:09:55.779 20.871 - 20.973: 99.0893% ( 2) 00:09:55.779 20.973 - 21.076: 99.1243% ( 2) 00:09:55.779 21.076 - 21.179: 99.1594% ( 2) 00:09:55.779 21.179 - 21.282: 99.1769% ( 1) 00:09:55.779 21.282 - 21.385: 99.2119% ( 2) 00:09:55.779 21.385 - 21.488: 99.2294% ( 1) 00:09:55.779 21.590 - 21.693: 99.2820% ( 3) 
00:09:55.779 21.693 - 21.796: 99.3170% ( 2) 00:09:55.779 21.796 - 21.899: 99.3695% ( 3) 00:09:55.779 21.899 - 22.002: 99.4046% ( 2) 00:09:55.779 22.002 - 22.104: 99.4221% ( 1) 00:09:55.779 22.104 - 22.207: 99.4746% ( 3) 00:09:55.779 22.207 - 22.310: 99.5096% ( 2) 00:09:55.779 22.310 - 22.413: 99.5447% ( 2) 00:09:55.779 22.413 - 22.516: 99.5622% ( 1) 00:09:55.779 22.516 - 22.618: 99.5797% ( 1) 00:09:55.779 22.721 - 22.824: 99.5972% ( 1) 00:09:55.779 23.235 - 23.338: 99.6147% ( 1) 00:09:55.779 23.544 - 23.647: 99.6322% ( 1) 00:09:55.779 23.852 - 23.955: 99.6497% ( 1) 00:09:55.779 24.058 - 24.161: 99.6673% ( 1) 00:09:55.779 24.263 - 24.366: 99.7023% ( 2) 00:09:55.779 24.366 - 24.469: 99.7198% ( 1) 00:09:55.779 24.469 - 24.572: 99.7548% ( 2) 00:09:55.779 24.778 - 24.880: 99.7898% ( 2) 00:09:55.779 25.292 - 25.394: 99.8074% ( 1) 00:09:55.779 25.703 - 25.806: 99.8249% ( 1) 00:09:55.779 25.806 - 25.908: 99.8424% ( 1) 00:09:55.779 26.011 - 26.114: 99.8599% ( 1) 00:09:55.779 26.731 - 26.937: 99.8774% ( 1) 00:09:55.779 27.348 - 27.553: 99.8949% ( 1) 00:09:55.779 29.610 - 29.815: 99.9124% ( 1) 00:09:55.779 30.843 - 31.049: 99.9299% ( 1) 00:09:55.779 33.105 - 33.311: 99.9475% ( 1) 00:09:55.779 42.769 - 42.975: 99.9650% ( 1) 00:09:55.779 46.676 - 46.882: 99.9825% ( 1) 00:09:55.779 81.838 - 82.249: 100.0000% ( 1) 00:09:55.779 00:09:55.779 Complete histogram 00:09:55.779 ================== 00:09:55.779 Range in us Cumulative Count 00:09:55.779 8.276 - 8.328: 0.0175% ( 1) 00:09:55.779 8.328 - 8.379: 0.1226% ( 6) 00:09:55.779 8.379 - 8.431: 4.6760% ( 260) 00:09:55.779 8.431 - 8.482: 12.4343% ( 443) 00:09:55.779 8.482 - 8.533: 23.1349% ( 611) 00:09:55.779 8.533 - 8.585: 38.7391% ( 891) 00:09:55.779 8.585 - 8.636: 54.7986% ( 917) 00:09:55.779 8.636 - 8.688: 66.5674% ( 672) 00:09:55.779 8.688 - 8.739: 75.0963% ( 487) 00:09:55.779 8.739 - 8.790: 82.6445% ( 431) 00:09:55.779 8.790 - 8.842: 88.3888% ( 328) 00:09:55.779 8.842 - 8.893: 91.5762% ( 182) 00:09:55.779 8.893 - 8.945: 93.3800% ( 103) 00:09:55.779 8.945 - 8.996: 94.6760% ( 74) 00:09:55.779 8.996 - 9.047: 95.3765% ( 40) 00:09:55.779 9.047 - 9.099: 95.7443% ( 21) 00:09:55.779 9.099 - 9.150: 96.1121% ( 21) 00:09:55.779 9.150 - 9.202: 96.2522% ( 8) 00:09:55.779 9.202 - 9.253: 96.4448% ( 11) 00:09:55.779 9.253 - 9.304: 96.6200% ( 10) 00:09:55.779 9.304 - 9.356: 96.8651% ( 14) 00:09:55.779 9.356 - 9.407: 97.1278% ( 15) 00:09:55.779 9.407 - 9.459: 97.2680% ( 8) 00:09:55.779 9.459 - 9.510: 97.4431% ( 10) 00:09:55.779 9.510 - 9.561: 97.6357% ( 11) 00:09:55.779 9.561 - 9.613: 97.7758% ( 8) 00:09:55.779 9.613 - 9.664: 97.8109% ( 2) 00:09:55.779 9.664 - 9.716: 97.9510% ( 8) 00:09:55.779 9.716 - 9.767: 97.9860% ( 2) 00:09:55.779 9.767 - 9.818: 98.0210% ( 2) 00:09:55.779 9.870 - 9.921: 98.0385% ( 1) 00:09:55.779 9.973 - 10.024: 98.0736% ( 2) 00:09:55.779 10.230 - 10.281: 98.0911% ( 1) 00:09:55.779 10.384 - 10.435: 98.1086% ( 1) 00:09:55.779 10.487 - 10.538: 98.1261% ( 1) 00:09:55.779 10.898 - 10.949: 98.1436% ( 1) 00:09:55.779 11.052 - 11.104: 98.1611% ( 1) 00:09:55.779 11.309 - 11.361: 98.1786% ( 1) 00:09:55.779 11.361 - 11.412: 98.1961% ( 1) 00:09:55.779 11.463 - 11.515: 98.2137% ( 1) 00:09:55.779 12.492 - 12.543: 98.2312% ( 1) 00:09:55.779 13.006 - 13.057: 98.2487% ( 1) 00:09:55.779 13.160 - 13.263: 98.2662% ( 1) 00:09:55.779 13.365 - 13.468: 98.2837% ( 1) 00:09:55.779 13.777 - 13.880: 98.3012% ( 1) 00:09:55.779 13.982 - 14.085: 98.3363% ( 2) 00:09:55.779 14.085 - 14.188: 98.4063% ( 4) 00:09:55.779 14.188 - 14.291: 98.5114% ( 6) 00:09:55.779 14.291 - 14.394: 
98.6515% ( 8) 00:09:55.779 14.394 - 14.496: 98.7215% ( 4) 00:09:55.779 14.496 - 14.599: 98.8091% ( 5) 00:09:55.779 14.599 - 14.702: 98.8441% ( 2) 00:09:55.779 14.702 - 14.805: 98.9317% ( 5) 00:09:55.779 14.805 - 14.908: 99.0193% ( 5) 00:09:55.779 14.908 - 15.010: 99.1594% ( 8) 00:09:55.779 15.010 - 15.113: 99.2294% ( 4) 00:09:55.779 15.113 - 15.216: 99.3345% ( 6) 00:09:55.779 15.216 - 15.319: 99.3520% ( 1) 00:09:55.779 15.319 - 15.422: 99.4221% ( 4) 00:09:55.779 15.422 - 15.524: 99.4746% ( 3) 00:09:55.779 15.524 - 15.627: 99.4921% ( 1) 00:09:55.779 15.627 - 15.730: 99.5096% ( 1) 00:09:55.779 15.833 - 15.936: 99.5271% ( 1) 00:09:55.779 16.141 - 16.244: 99.5622% ( 2) 00:09:55.779 16.347 - 16.450: 99.5797% ( 1) 00:09:55.779 16.861 - 16.964: 99.5972% ( 1) 00:09:55.779 16.964 - 17.067: 99.6147% ( 1) 00:09:55.779 17.684 - 17.786: 99.6322% ( 1) 00:09:55.779 18.300 - 18.403: 99.6497% ( 1) 00:09:55.779 18.506 - 18.609: 99.6673% ( 1) 00:09:55.779 19.123 - 19.226: 99.6848% ( 1) 00:09:55.779 19.740 - 19.843: 99.7023% ( 1) 00:09:55.779 19.843 - 19.945: 99.7198% ( 1) 00:09:55.779 19.945 - 20.048: 99.7373% ( 1) 00:09:55.779 20.048 - 20.151: 99.7548% ( 1) 00:09:55.779 20.254 - 20.357: 99.7723% ( 1) 00:09:55.779 20.459 - 20.562: 99.7898% ( 1) 00:09:55.779 25.292 - 25.394: 99.8074% ( 1) 00:09:55.779 25.394 - 25.497: 99.8249% ( 1) 00:09:55.779 27.965 - 28.170: 99.8424% ( 1) 00:09:55.779 28.787 - 28.993: 99.8599% ( 1) 00:09:55.779 30.432 - 30.638: 99.8774% ( 1) 00:09:55.779 35.778 - 35.984: 99.8949% ( 1) 00:09:55.779 37.629 - 37.835: 99.9124% ( 1) 00:09:55.779 45.031 - 45.237: 99.9299% ( 1) 00:09:55.780 53.873 - 54.284: 99.9475% ( 1) 00:09:55.780 61.276 - 61.687: 99.9650% ( 1) 00:09:55.780 69.912 - 70.323: 99.9825% ( 1) 00:09:55.780 1388.363 - 1394.943: 100.0000% ( 1) 00:09:55.780 00:09:55.780 00:09:55.780 real 0m1.311s 00:09:55.780 user 0m1.091s 00:09:55.780 sys 0m0.167s 00:09:55.780 ************************************ 00:09:55.780 END TEST nvme_overhead 00:09:55.780 ************************************ 00:09:55.780 03:20:19 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.780 03:20:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:55.780 03:20:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:55.780 03:20:19 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:55.780 03:20:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.780 03:20:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.780 ************************************ 00:09:55.780 START TEST nvme_arbitration 00:09:55.780 ************************************ 00:09:56.039 03:20:19 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:59.328 Initializing NVMe Controllers 00:09:59.328 Attached to 0000:00:10.0 00:09:59.328 Attached to 0000:00:11.0 00:09:59.328 Attached to 0000:00:13.0 00:09:59.328 Attached to 0000:00:12.0 00:09:59.328 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:59.328 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:59.328 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:59.328 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:59.328 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:59.329 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:59.329 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:59.329 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:59.329 Initialization complete. Launching workers. 00:09:59.329 Starting thread on core 1 with urgent priority queue 00:09:59.329 Starting thread on core 2 with urgent priority queue 00:09:59.329 Starting thread on core 3 with urgent priority queue 00:09:59.329 Starting thread on core 0 with urgent priority queue 00:09:59.329 QEMU NVMe Ctrl (12340 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:59.329 QEMU NVMe Ctrl (12342 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:59.329 QEMU NVMe Ctrl (12341 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:09:59.329 QEMU NVMe Ctrl (12342 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:09:59.329 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:09:59.329 QEMU NVMe Ctrl (12342 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:09:59.329 ======================================================== 00:09:59.329 00:09:59.329 ************************************ 00:09:59.329 END TEST nvme_arbitration 00:09:59.329 ************************************ 00:09:59.329 00:09:59.329 real 0m3.497s 00:09:59.329 user 0m9.525s 00:09:59.329 sys 0m0.177s 00:09:59.329 03:20:22 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.329 03:20:22 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:59.587 03:20:22 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:59.587 03:20:22 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:59.587 03:20:22 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.587 03:20:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.587 ************************************ 00:09:59.587 START TEST nvme_single_aen 00:09:59.587 ************************************ 00:09:59.587 03:20:22 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:59.846 Asynchronous Event Request test 00:09:59.846 Attached to 0000:00:10.0 00:09:59.846 Attached to 0000:00:11.0 00:09:59.846 Attached to 0000:00:13.0 00:09:59.846 Attached to 0000:00:12.0 00:09:59.846 Reset controller to setup AER completions for this process 00:09:59.846 Registering asynchronous event callbacks... 
00:09:59.846 Getting orig temperature thresholds of all controllers 00:09:59.846 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:59.846 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:59.846 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:59.846 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:59.846 Setting all controllers temperature threshold low to trigger AER 00:09:59.846 Waiting for all controllers temperature threshold to be set lower 00:09:59.846 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:59.846 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:59.846 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:59.846 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:59.846 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:59.846 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:59.846 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:59.846 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:59.846 Waiting for all controllers to trigger AER and reset threshold 00:09:59.846 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:59.846 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:59.846 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:59.846 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:59.846 Cleaning up... 00:09:59.846 00:09:59.846 real 0m0.302s 00:09:59.846 user 0m0.103s 00:09:59.846 sys 0m0.148s 00:09:59.846 ************************************ 00:09:59.846 END TEST nvme_single_aen 00:09:59.846 ************************************ 00:09:59.846 03:20:23 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.846 03:20:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:59.846 03:20:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:59.846 03:20:23 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.846 03:20:23 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.846 03:20:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.846 ************************************ 00:09:59.846 START TEST nvme_doorbell_aers 00:09:59.846 ************************************ 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:59.846 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:59.847 03:20:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:00.451 [2024-11-05 03:20:23.730870] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:10.431 Executing: test_write_invalid_db 00:10:10.431 Waiting for AER completion... 00:10:10.431 Failure: test_write_invalid_db 00:10:10.431 00:10:10.431 Executing: test_invalid_db_write_overflow_sq 00:10:10.431 Waiting for AER completion... 00:10:10.431 Failure: test_invalid_db_write_overflow_sq 00:10:10.431 00:10:10.431 Executing: test_invalid_db_write_overflow_cq 00:10:10.431 Waiting for AER completion... 00:10:10.431 Failure: test_invalid_db_write_overflow_cq 00:10:10.431 00:10:10.431 03:20:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:10.431 03:20:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:10.431 [2024-11-05 03:20:33.786866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:20.412 Executing: test_write_invalid_db 00:10:20.412 Waiting for AER completion... 00:10:20.412 Failure: test_write_invalid_db 00:10:20.412 00:10:20.412 Executing: test_invalid_db_write_overflow_sq 00:10:20.412 Waiting for AER completion... 00:10:20.412 Failure: test_invalid_db_write_overflow_sq 00:10:20.412 00:10:20.412 Executing: test_invalid_db_write_overflow_cq 00:10:20.412 Waiting for AER completion... 00:10:20.412 Failure: test_invalid_db_write_overflow_cq 00:10:20.412 00:10:20.412 03:20:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:20.412 03:20:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:20.412 [2024-11-05 03:20:43.881579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:30.388 Executing: test_write_invalid_db 00:10:30.388 Waiting for AER completion... 00:10:30.388 Failure: test_write_invalid_db 00:10:30.388 00:10:30.388 Executing: test_invalid_db_write_overflow_sq 00:10:30.388 Waiting for AER completion... 00:10:30.388 Failure: test_invalid_db_write_overflow_sq 00:10:30.388 00:10:30.388 Executing: test_invalid_db_write_overflow_cq 00:10:30.388 Waiting for AER completion... 
00:10:30.388 Failure: test_invalid_db_write_overflow_cq 00:10:30.388 00:10:30.388 03:20:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:30.388 03:20:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:30.388 [2024-11-05 03:20:53.913894] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.372 Executing: test_write_invalid_db 00:10:40.372 Waiting for AER completion... 00:10:40.372 Failure: test_write_invalid_db 00:10:40.372 00:10:40.372 Executing: test_invalid_db_write_overflow_sq 00:10:40.372 Waiting for AER completion... 00:10:40.372 Failure: test_invalid_db_write_overflow_sq 00:10:40.372 00:10:40.372 Executing: test_invalid_db_write_overflow_cq 00:10:40.372 Waiting for AER completion... 00:10:40.372 Failure: test_invalid_db_write_overflow_cq 00:10:40.372 00:10:40.372 ************************************ 00:10:40.372 END TEST nvme_doorbell_aers 00:10:40.372 ************************************ 00:10:40.372 00:10:40.372 real 0m40.359s 00:10:40.372 user 0m28.459s 00:10:40.372 sys 0m11.515s 00:10:40.372 03:21:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.372 03:21:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:40.372 03:21:03 nvme -- nvme/nvme.sh@97 -- # uname 00:10:40.372 03:21:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:40.372 03:21:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:40.372 03:21:03 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:40.372 03:21:03 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.372 03:21:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.372 ************************************ 00:10:40.372 START TEST nvme_multi_aen 00:10:40.372 ************************************ 00:10:40.372 03:21:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:40.631 [2024-11-05 03:21:04.001799] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.001886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.001927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.003767] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.003813] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.003828] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.005403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. 
Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.005446] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.005463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.006967] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.007011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 [2024-11-05 03:21:04.007026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:40.631 Child process pid: 65153 00:10:40.891 [Child] Asynchronous Event Request test 00:10:40.891 [Child] Attached to 0000:00:10.0 00:10:40.891 [Child] Attached to 0000:00:11.0 00:10:40.891 [Child] Attached to 0000:00:13.0 00:10:40.891 [Child] Attached to 0000:00:12.0 00:10:40.891 [Child] Registering asynchronous event callbacks... 00:10:40.892 [Child] Getting orig temperature thresholds of all controllers 00:10:40.892 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:40.892 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 [Child] Cleaning up... 00:10:40.892 Asynchronous Event Request test 00:10:40.892 Attached to 0000:00:10.0 00:10:40.892 Attached to 0000:00:11.0 00:10:40.892 Attached to 0000:00:13.0 00:10:40.892 Attached to 0000:00:12.0 00:10:40.892 Reset controller to setup AER completions for this process 00:10:40.892 Registering asynchronous event callbacks... 
00:10:40.892 Getting orig temperature thresholds of all controllers 00:10:40.892 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:40.892 Setting all controllers temperature threshold low to trigger AER 00:10:40.892 Waiting for all controllers temperature threshold to be set lower 00:10:40.892 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:40.892 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:40.892 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:40.892 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:40.892 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:40.892 Waiting for all controllers to trigger AER and reset threshold 00:10:40.892 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:40.892 Cleaning up... 00:10:40.892 00:10:40.892 real 0m0.617s 00:10:40.892 user 0m0.194s 00:10:40.892 sys 0m0.316s 00:10:40.892 03:21:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.892 03:21:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:40.892 ************************************ 00:10:40.892 END TEST nvme_multi_aen 00:10:40.892 ************************************ 00:10:40.892 03:21:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:40.892 03:21:04 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:40.892 03:21:04 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.892 03:21:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.892 ************************************ 00:10:40.892 START TEST nvme_startup 00:10:40.892 ************************************ 00:10:40.892 03:21:04 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:41.151 Initializing NVMe Controllers 00:10:41.151 Attached to 0000:00:10.0 00:10:41.151 Attached to 0000:00:11.0 00:10:41.151 Attached to 0000:00:13.0 00:10:41.151 Attached to 0000:00:12.0 00:10:41.151 Initialization complete. 00:10:41.151 Time used:191568.266 (us). 
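The single- and multi-process AEN runs above follow the same protocol sequence: register an asynchronous-event callback, use Set Features to push the composite temperature threshold below the drive's current 323 K reading so the controller raises an AER at once, service the event (log page 2, SMART / health), then restore the original 343 K threshold. A condensed sketch of the arming step with SPDK's public controller API (the 318 K value is an arbitrary below-current choice for illustration):

    #include "spdk/nvme.h"

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        /* Temperature events reference log page 2 (SMART / health),
         * the source of the "aer_cb for log page 2" lines above. */
    }

    static void set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl) { }

    void arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        uint32_t cdw11 = 318; /* Kelvin; below current temp => immediate AER */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        cdw11, 0, NULL, 0,
                                        set_feat_done, NULL);
        /* The caller then polls spdk_nvme_ctrlr_process_admin_completions()
         * until the event fires. */
    }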
00:10:41.410 00:10:41.410 real 0m0.294s 00:10:41.410 user 0m0.101s 00:10:41.410 sys 0m0.150s 00:10:41.410 ************************************ 00:10:41.410 END TEST nvme_startup 00:10:41.410 ************************************ 00:10:41.410 03:21:04 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.410 03:21:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:41.410 03:21:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:41.410 03:21:04 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:41.410 03:21:04 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.410 03:21:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.410 ************************************ 00:10:41.410 START TEST nvme_multi_secondary 00:10:41.410 ************************************ 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65209 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65210 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:41.410 03:21:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:44.700 Initializing NVMe Controllers 00:10:44.700 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.700 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.700 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.700 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.700 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:44.700 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:44.700 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:44.700 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:44.700 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:44.700 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:44.700 Initialization complete. Launching workers. 
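For reading the spdk_nvme_perf tables that follow: each row is one namespace/core pairing, MiB/s is derived directly from IOPS at the 4096-byte IO size (for example 3439.39 IOPS x 4096 B / 2^20 = 13.44 MiB/s), and Average/min/max are per-IO latencies in microseconds. With the queue depth of 16 from the -q flag, roughly IOPS = 16 / avg_latency, so rows with higher average latency show proportionally lower IOPS.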
00:10:44.700 ======================================================== 00:10:44.700 Latency(us) 00:10:44.700 Device Information : IOPS MiB/s Average min max 00:10:44.700 PCIE (0000:00:10.0) NSID 1 from core 2: 3439.39 13.44 4644.73 1171.85 11058.26 00:10:44.700 PCIE (0000:00:11.0) NSID 1 from core 2: 3439.39 13.44 4645.34 1330.89 10944.68 00:10:44.700 PCIE (0000:00:13.0) NSID 1 from core 2: 3439.39 13.44 4644.92 1326.56 11468.22 00:10:44.700 PCIE (0000:00:12.0) NSID 1 from core 2: 3439.39 13.44 4645.47 1367.45 11040.92 00:10:44.700 PCIE (0000:00:12.0) NSID 2 from core 2: 3439.39 13.44 4645.35 1310.45 11150.89 00:10:44.700 PCIE (0000:00:12.0) NSID 3 from core 2: 3439.39 13.44 4645.41 1309.84 11351.41 00:10:44.700 ======================================================== 00:10:44.700 Total : 20636.35 80.61 4645.20 1171.85 11468.22 00:10:44.700 00:10:44.958 03:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65209 00:10:44.958 Initializing NVMe Controllers 00:10:44.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:44.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:44.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:44.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:44.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:44.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:44.959 Initialization complete. Launching workers. 00:10:44.959 ======================================================== 00:10:44.959 Latency(us) 00:10:44.959 Device Information : IOPS MiB/s Average min max 00:10:44.959 PCIE (0000:00:10.0) NSID 1 from core 1: 4993.42 19.51 3201.84 1513.79 5935.36 00:10:44.959 PCIE (0000:00:11.0) NSID 1 from core 1: 4993.42 19.51 3203.64 1521.89 6123.03 00:10:44.959 PCIE (0000:00:13.0) NSID 1 from core 1: 4993.42 19.51 3203.66 1519.15 6210.52 00:10:44.959 PCIE (0000:00:12.0) NSID 1 from core 1: 4993.42 19.51 3203.65 1522.74 6301.44 00:10:44.959 PCIE (0000:00:12.0) NSID 2 from core 1: 4993.42 19.51 3203.71 1570.62 5661.69 00:10:44.959 PCIE (0000:00:12.0) NSID 3 from core 1: 4993.42 19.51 3203.88 1500.19 5755.54 00:10:44.959 ======================================================== 00:10:44.959 Total : 29960.54 117.03 3203.40 1500.19 6301.44 00:10:44.959 00:10:46.862 Initializing NVMe Controllers 00:10:46.862 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:46.862 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:46.862 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:46.862 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:46.862 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:46.862 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:46.862 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:46.862 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:46.862 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:46.862 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:46.862 Initialization complete. Launching workers. 
00:10:46.862 ======================================================== 00:10:46.862 Latency(us) 00:10:46.862 Device Information : IOPS MiB/s Average min max 00:10:46.862 PCIE (0000:00:10.0) NSID 1 from core 0: 8501.64 33.21 1880.56 907.86 5960.43 00:10:46.862 PCIE (0000:00:11.0) NSID 1 from core 0: 8501.64 33.21 1881.54 936.57 5911.39 00:10:46.862 PCIE (0000:00:13.0) NSID 1 from core 0: 8501.64 33.21 1881.52 886.18 5974.83 00:10:46.862 PCIE (0000:00:12.0) NSID 1 from core 0: 8501.64 33.21 1881.49 816.85 6188.91 00:10:46.862 PCIE (0000:00:12.0) NSID 2 from core 0: 8501.64 33.21 1881.46 773.06 6521.92 00:10:46.862 PCIE (0000:00:12.0) NSID 3 from core 0: 8501.64 33.21 1881.44 727.04 6896.52 00:10:46.862 ======================================================== 00:10:46.862 Total : 51009.86 199.26 1881.34 727.04 6896.52 00:10:46.862 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65210 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65279 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65280 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:46.862 03:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:50.150 Initializing NVMe Controllers 00:10:50.150 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:50.150 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:50.150 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:50.150 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:50.150 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:50.150 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:50.150 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:50.150 Initialization complete. Launching workers. 
00:10:50.150 ======================================================== 00:10:50.150 Latency(us) 00:10:50.150 Device Information : IOPS MiB/s Average min max 00:10:50.150 PCIE (0000:00:10.0) NSID 1 from core 1: 5407.47 21.12 2956.66 961.38 6561.89 00:10:50.150 PCIE (0000:00:11.0) NSID 1 from core 1: 5407.47 21.12 2958.39 996.59 6128.56 00:10:50.150 PCIE (0000:00:13.0) NSID 1 from core 1: 5407.47 21.12 2958.52 1000.03 6099.00 00:10:50.150 PCIE (0000:00:12.0) NSID 1 from core 1: 5407.47 21.12 2958.57 990.19 6516.44 00:10:50.150 PCIE (0000:00:12.0) NSID 2 from core 1: 5407.47 21.12 2958.72 979.13 6999.31 00:10:50.150 PCIE (0000:00:12.0) NSID 3 from core 1: 5407.47 21.12 2958.85 976.05 6173.80 00:10:50.150 ======================================================== 00:10:50.150 Total : 32444.84 126.74 2958.29 961.38 6999.31 00:10:50.150 00:10:50.150 Initializing NVMe Controllers 00:10:50.150 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:50.150 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:50.150 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:50.150 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:50.150 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:50.150 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:50.150 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:50.151 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:50.151 Initialization complete. Launching workers. 00:10:50.151 ======================================================== 00:10:50.151 Latency(us) 00:10:50.151 Device Information : IOPS MiB/s Average min max 00:10:50.151 PCIE (0000:00:10.0) NSID 1 from core 0: 5259.68 20.55 3039.61 1006.01 6675.31 00:10:50.151 PCIE (0000:00:11.0) NSID 1 from core 0: 5259.68 20.55 3041.38 1033.00 6417.63 00:10:50.151 PCIE (0000:00:13.0) NSID 1 from core 0: 5259.68 20.55 3041.62 1021.70 6531.31 00:10:50.151 PCIE (0000:00:12.0) NSID 1 from core 0: 5259.68 20.55 3041.72 1015.64 6551.67 00:10:50.151 PCIE (0000:00:12.0) NSID 2 from core 0: 5259.68 20.55 3041.88 1010.66 6990.82 00:10:50.151 PCIE (0000:00:12.0) NSID 3 from core 0: 5259.68 20.55 3041.99 1017.69 7074.09 00:10:50.151 ======================================================== 00:10:50.151 Total : 31558.09 123.27 3041.37 1006.01 7074.09 00:10:50.151 00:10:52.055 Initializing NVMe Controllers 00:10:52.055 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:52.055 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:52.055 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:52.055 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:52.055 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:52.055 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:52.055 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:52.055 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:52.055 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:52.055 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:52.055 Initialization complete. Launching workers. 
00:10:52.055 ======================================================== 00:10:52.055 Latency(us) 00:10:52.055 Device Information : IOPS MiB/s Average min max 00:10:52.055 PCIE (0000:00:10.0) NSID 1 from core 2: 3437.17 13.43 4653.26 1052.25 11450.52 00:10:52.055 PCIE (0000:00:11.0) NSID 1 from core 2: 3437.17 13.43 4654.80 1066.87 11161.17 00:10:52.055 PCIE (0000:00:13.0) NSID 1 from core 2: 3437.17 13.43 4654.77 1101.55 11157.24 00:10:52.055 PCIE (0000:00:12.0) NSID 1 from core 2: 3437.17 13.43 4654.70 1121.19 11476.33 00:10:52.055 PCIE (0000:00:12.0) NSID 2 from core 2: 3437.17 13.43 4654.64 1109.16 10922.18 00:10:52.055 PCIE (0000:00:12.0) NSID 3 from core 2: 3437.17 13.43 4654.57 1088.56 11627.51 00:10:52.055 ======================================================== 00:10:52.055 Total : 20622.99 80.56 4654.45 1052.25 11627.51 00:10:52.055 00:10:52.314 ************************************ 00:10:52.314 END TEST nvme_multi_secondary 00:10:52.314 ************************************ 00:10:52.314 03:21:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65279 00:10:52.314 03:21:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65280 00:10:52.314 00:10:52.314 real 0m10.844s 00:10:52.314 user 0m18.545s 00:10:52.314 sys 0m1.143s 00:10:52.314 03:21:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.314 03:21:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:52.314 03:21:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:52.314 03:21:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:52.314 03:21:15 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64208 ]] 00:10:52.314 03:21:15 nvme -- common/autotest_common.sh@1092 -- # kill 64208 00:10:52.314 03:21:15 nvme -- common/autotest_common.sh@1093 -- # wait 64208 00:10:52.314 [2024-11-05 03:21:15.729670] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.729810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.729891] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.729947] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.736190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.736268] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.736317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.736351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.314 [2024-11-05 03:21:15.741373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 
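Two things worth unpacking at this point in the log. First, nvme_multi_secondary just demonstrated SPDK's multi-process mode: three spdk_nvme_perf instances (cores 0x1, 0x2, 0x4) shared the same controllers through hugepage shared memory, keyed by the "-i 0" instance id. Second, the repeated "owning process ... is not found. Dropping the request." errors around this point are expected teardown noise: the primary discards queued admin requests whose owning process has already exited. A minimal sketch of the secondary-side setup, assuming the env API (the process name is illustrative):

    #include "spdk/env.h"

    int init_secondary_env(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name   = "perf_secondary";
        opts.shm_id = 0;   /* must match the primary's -i 0 */
        if (spdk_env_init(&opts) < 0)
            return -1;
        /* A subsequent spdk_nvme_probe() in this process attaches to the
         * controllers the primary already initialized. */
        return 0;
    }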
00:10:52.314 [2024-11-05 03:21:15.741454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.741486] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.741527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.745795] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.745851] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.745872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.315 [2024-11-05 03:21:15.745895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65152) is not found. Dropping the request. 00:10:52.573 03:21:15 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:10:52.573 03:21:15 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:10:52.573 03:21:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:52.573 03:21:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:52.573 03:21:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.573 03:21:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.573 ************************************ 00:10:52.573 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:52.573 ************************************ 00:10:52.573 03:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:52.573 * Looking for test storage... 
00:10:52.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:52.573 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.573 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.573 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:52.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.833 --rc genhtml_branch_coverage=1 00:10:52.833 --rc genhtml_function_coverage=1 00:10:52.833 --rc genhtml_legend=1 00:10:52.833 --rc geninfo_all_blocks=1 00:10:52.833 --rc geninfo_unexecuted_blocks=1 00:10:52.833 00:10:52.833 ' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:52.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.833 --rc genhtml_branch_coverage=1 00:10:52.833 --rc genhtml_function_coverage=1 00:10:52.833 --rc genhtml_legend=1 00:10:52.833 --rc geninfo_all_blocks=1 00:10:52.833 --rc geninfo_unexecuted_blocks=1 00:10:52.833 00:10:52.833 ' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:52.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.833 --rc genhtml_branch_coverage=1 00:10:52.833 --rc genhtml_function_coverage=1 00:10:52.833 --rc genhtml_legend=1 00:10:52.833 --rc geninfo_all_blocks=1 00:10:52.833 --rc geninfo_unexecuted_blocks=1 00:10:52.833 00:10:52.833 ' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:52.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.833 --rc genhtml_branch_coverage=1 00:10:52.833 --rc genhtml_function_coverage=1 00:10:52.833 --rc genhtml_legend=1 00:10:52.833 --rc geninfo_all_blocks=1 00:10:52.833 --rc geninfo_unexecuted_blocks=1 00:10:52.833 00:10:52.833 ' 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:52.833 
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=()
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=()
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65442
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65442
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65442 ']'
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:52.833 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:52.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:52.834 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:52.834 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:52.834 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:52.834 [2024-11-05 03:21:16.415045] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... [2024-11-05 03:21:16.415163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65442 ]
00:10:53.092 [2024-11-05 03:21:16.616622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:53.350 [2024-11-05 03:21:16.747387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:53.350 [2024-11-05 03:21:16.747558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:53.350 [2024-11-05 03:21:16.747755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:53.350 [2024-11-05 03:21:16.747792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:54.286 nvme0n1
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_4jUPQ.txt
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:54.286 true
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730776877
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65470
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:10:54.286 03:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
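[editor's note] In outline, the sequence this test drives against the freshly started spdk_tgt (all RPC names and values taken from the trace; the get-features payload is the base64 blob shown above): park one admin command behind an injected error, then prove a controller reset un-sticks it well before the 15 s injection timeout.

    # condensed from the trace; rpc.py is spdk_repo/spdk/scripts/rpc.py
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # fire a Get Features admin command; --do_not_submit holds it inside the driver
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 command from the trace> &
    sleep 2
    rpc.py bdev_nvme_reset_controller nvme0   # reset must manually complete the held command

The script then waits for the background send_cmd, checks that the elapsed diff_time stays under test_timeout (5 s), and verifies the completion carries the injected sct/sc pair, which is exactly what the next records show.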
00:10:54.286 03:21:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:56.814 [2024-11-05 03:21:19.829781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:10:56.814 [2024-11-05 03:21:19.830160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:10:56.814 [2024-11-05 03:21:19.830191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:10:56.814 [2024-11-05 03:21:19.830208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.814 [2024-11-05 03:21:19.832702] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:10:56.814 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65470
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65470
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65470
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_4jUPQ.txt
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_4jUPQ.txt
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65442
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65442 ']'
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65442
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:56.814 03:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65442
00:10:56.815 03:21:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:56.815 03:21:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:10:56.815 killing process with pid 65442
00:10:56.815 03:21:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65442'
00:10:56.815 03:21:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65442
00:10:59.345 03:21:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65442
00:10:59.345 03:21:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:10:59.345 03:21:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:10:59.345
00:10:59.345 real 0m6.495s
00:10:59.345 user 0m22.566s
00:10:59.345 sys 0m0.849s
00:10:59.345 ************************************
00:10:59.345 END TEST bdev_nvme_reset_stuck_adm_cmd
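[editor's note] The base64_decode_bits calls above extract the Status Code and Status Code Type from the raw 16-byte NVMe completion that bdev_nvme_send_cmd saved as base64. Per the NVMe spec, bytes 14-15 of a completion entry hold a 16-bit field whose bit 0 is the phase tag, bits 1-8 the SC, and bits 9-11 the SCT. A self-contained sketch of the same decoding (an illustration, not the helper's exact code):

    cpl=AAAAAAAAAAAAAAAAAAACAA==                 # the .cpl value captured above
    bytes=($(base64 -d <<< "$cpl" | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( (bytes[15] << 8) | bytes[14] ))   # little-endian 16-bit status field
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # byte 14 is 0x02 here, so sc=0x1 sct=0x0 -- matching the injected --sc 1 --sct 0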
00:10:59.345 ************************************
00:10:59.345 03:21:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:59.345 03:21:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:10:59.345 03:21:22 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:10:59.345 03:21:22 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:10:59.345 03:21:22 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:59.345 03:21:22 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:59.345 03:21:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:59.345 ************************************
00:10:59.345 START TEST nvme_fio
00:10:59.345 ************************************
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=()
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:10:59.345 03:21:22 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:10:59.345 03:21:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:10:59.967 03:21:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:10:59.967 03:21:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:10:59.967 03:21:23 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:10:59.967 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:10:59.967 fio-3.35
00:10:59.967 Starting 1 thread
00:11:03.257
00:11:03.257 test: (groupid=0, jobs=1): err= 0: pid=65623: Tue Nov 5 03:21:26 2024
00:11:03.257 read: IOPS=19.8k, BW=77.3MiB/s (81.0MB/s)(155MiB/2001msec)
00:11:03.257 slat (nsec): min=4209, max=89266, avg=5314.37, stdev=1537.06
00:11:03.257 clat (usec): min=201, max=11495, avg=3220.76, stdev=444.62
00:11:03.257 lat (usec): min=206, max=11584, avg=3226.07, stdev=445.27
00:11:03.257 clat percentiles (usec):
00:11:03.257 | 1.00th=[ 2900], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064],
00:11:03.257 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195],
00:11:03.257 | 70.00th=[ 3228], 80.00th=[ 3261], 90.00th=[ 3359], 95.00th=[ 3490],
00:11:03.257 | 99.00th=[ 5342], 99.50th=[ 5997], 99.90th=[ 8455], 99.95th=[ 9110],
00:11:03.257 | 99.99th=[11338]
00:11:03.257 bw ( KiB/s): min=77285, max=80624, per=99.14%, avg=78441.67, stdev=1891.09, samples=3
00:11:03.257 iops : min=19321, max=20156, avg=19610.33, stdev=472.85, samples=3
00:11:03.257 write: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(154MiB/2001msec); 0 zone resets
00:11:03.257 slat (nsec): min=4344, max=82877, avg=5596.40, stdev=1507.13
00:11:03.257 clat (usec): min=220, max=11349, avg=3226.70, stdev=439.93
00:11:03.257 lat (usec): min=225, max=11363, avg=3232.30, stdev=440.54
00:11:03.257 clat percentiles (usec):
00:11:03.257 | 1.00th=[ 2900], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064],
00:11:03.257 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195],
00:11:03.257 | 70.00th=[ 3228], 80.00th=[ 3261], 90.00th=[ 3359], 95.00th=[ 3490],
00:11:03.257 | 99.00th=[ 5211], 99.50th=[ 5932], 99.90th=[ 8455], 99.95th=[ 9372],
00:11:03.257 | 99.99th=[11076]
00:11:03.257 bw ( KiB/s): min=77216, max=81016, per=99.47%, avg=78537.33, stdev=2148.15, samples=3
00:11:03.257 iops : min=19304, max=20254, avg=19634.33, stdev=537.04, samples=3
00:11:03.257 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:11:03.257 lat (msec) : 2=0.12%, 4=97.43%, 10=2.37%, 20=0.04%
00:11:03.257 cpu : usr=99.20%, sys=0.15%, ctx=3, majf=0, minf=606
00:11:03.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:03.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:03.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:03.257 issued rwts: total=39581,39498,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:03.257 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:03.257
00:11:03.257 Run status group 0 (all jobs):
00:11:03.257 READ: bw=77.3MiB/s (81.0MB/s), 77.3MiB/s-77.3MiB/s (81.0MB/s-81.0MB/s), io=155MiB (162MB), run=2001-2001msec
00:11:03.257 WRITE: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec
00:11:03.516 -----------------------------------------------------
00:11:03.516 Suppressions used:
00:11:03.516 count bytes template
00:11:03.516 1 32 /usr/src/fio/parse.c
00:11:03.516 1 8 libtcmalloc_minimal.so
00:11:03.516 -----------------------------------------------------
00:11:03.516
00:11:03.516 03:21:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:03.516 03:21:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:03.516 03:21:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:03.516 03:21:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:03.775 03:21:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:03.775 03:21:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:04.035 03:21:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:04.035 03:21:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:04.035 03:21:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:04.294 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:04.294 fio-3.35
00:11:04.294 Starting 1 thread
00:11:08.551
00:11:08.551 test: (groupid=0, jobs=1): err= 0: pid=65689: Tue Nov 5 03:21:31 2024
00:11:08.551 read: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(176MiB/2001msec)
00:11:08.551 slat (nsec): min=3788, max=56855, avg=4348.73, stdev=1048.61
00:11:08.551 clat (usec): min=245, max=9868, avg=2827.08, stdev=239.60
00:11:08.551 lat (usec): min=249, max=9925, avg=2831.43, stdev=239.95
00:11:08.551 clat percentiles (usec):
00:11:08.551 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704],
00:11:08.551 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835],
00:11:08.551 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3097],
00:11:08.551 | 99.00th=[ 3490], 99.50th=[ 3949], 99.90th=[ 5342], 99.95th=[ 7242],
00:11:08.551 | 99.99th=[ 9634]
00:11:08.551 bw ( KiB/s): min=89440, max=90712, per=99.76%, avg=90093.33, stdev=636.71, samples=3
00:11:08.551 iops : min=22360, max=22678, avg=22523.33, stdev=159.18, samples=3
00:11:08.551 write: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec); 0 zone resets
00:11:08.551 slat (nsec): min=3859, max=31381, avg=4669.00, stdev=1036.44
00:11:08.551 clat (usec): min=216, max=9712, avg=2831.63, stdev=246.06
00:11:08.551 lat (usec): min=221, max=9727, avg=2836.30, stdev=246.35
00:11:08.551 clat percentiles (usec):
00:11:08.551 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737],
00:11:08.551 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835],
00:11:08.551 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3130],
00:11:08.551 | 99.00th=[ 3523], 99.50th=[ 4047], 99.90th=[ 5866], 99.95th=[ 7504],
00:11:08.551 | 99.99th=[ 9241]
00:11:08.551 bw ( KiB/s): min=88960, max=92136, per=100.00%, avg=90301.33, stdev=1644.47, samples=3
00:11:08.551 iops : min=22240, max=23034, avg=22575.33, stdev=411.12, samples=3
00:11:08.551 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:11:08.551 lat (msec) : 2=0.10%, 4=99.36%, 10=0.51%
00:11:08.551 cpu : usr=99.50%, sys=0.05%, ctx=3, majf=0, minf=606
00:11:08.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:08.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:08.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:08.551 issued rwts: total=45178,44938,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:08.551 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:08.551
00:11:08.551 Run status group 0 (all jobs):
00:11:08.551 READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=176MiB (185MB), run=2001-2001msec
00:11:08.551 WRITE: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec
00:11:08.551 -----------------------------------------------------
00:11:08.551 Suppressions used:
00:11:08.551 count bytes template
00:11:08.551 1 32 /usr/src/fio/parse.c
00:11:08.551 1 8 libtcmalloc_minimal.so
00:11:08.551 -----------------------------------------------------
00:11:08.551
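[editor's note] Each pass of the per-bdf loop above runs the same fio job file against one controller through SPDK's external fio ioengine, with the ASan runtime force-loaded first (the ldd | grep libasan | awk dance is how the harness locates the right libasan path so the sanitized plugin's symbols resolve). Reduced to a single controller, the invocation is:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Note the PCI address is written with dots (0000.00.11.0) rather than colons; fio treats ':' specially in filenames, so the SPDK plugin accepts the dotted form.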
00:11:08.551 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:08.551 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:08.551 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:08.551 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:08.810 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:08.810 03:21:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:08.810 03:21:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:08.810 03:21:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:08.810 03:21:32 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:08.810 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:08.810 fio-3.35
00:11:08.810 Starting 1 thread
00:11:13.003
00:11:13.003 test: (groupid=0, jobs=1): err= 0: pid=65755: Tue Nov 5 03:21:36 2024
00:11:13.003 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec)
00:11:13.003 slat (nsec): min=4227, max=58324, avg=5256.22, stdev=1234.04
00:11:13.003 clat (usec): min=234, max=14272, avg=3109.63, stdev=419.55
00:11:13.003 lat (usec): min=239, max=14330, avg=3114.89, stdev=419.95
00:11:13.003 clat percentiles (usec):
00:11:13.003 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 2900], 20.00th=[ 2966],
00:11:13.003 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130],
00:11:13.003 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3261], 95.00th=[ 3359],
00:11:13.003 | 99.00th=[ 4359], 99.50th=[ 5473], 99.90th=[ 8979], 99.95th=[11731],
00:11:13.003 | 99.99th=[13960]
00:11:13.003 bw ( KiB/s): min=78072, max=86360, per=100.00%, avg=82085.33, stdev=4150.18, samples=3
00:11:13.003 iops : min=19518, max=21590, avg=20521.33, stdev=1037.54, samples=3
00:11:13.003 write: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets
00:11:13.003 slat (nsec): min=4334, max=39836, avg=5502.24, stdev=1169.61
00:11:13.003 clat (usec): min=388, max=14105, avg=3117.25, stdev=430.71
00:11:13.003 lat (usec): min=394, max=14117, avg=3122.75, stdev=431.05
00:11:13.003 clat percentiles (usec):
00:11:13.003 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 2900], 20.00th=[ 2966],
00:11:13.003 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130],
00:11:13.003 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3261], 95.00th=[ 3392],
00:11:13.003 | 99.00th=[ 4424], 99.50th=[ 5538], 99.90th=[10028], 99.95th=[11994],
00:11:13.003 | 99.99th=[13698]
00:11:13.003 bw ( KiB/s): min=77912, max=86352, per=100.00%, avg=82149.33, stdev=4220.11, samples=3
00:11:13.003 iops : min=19478, max=21588, avg=20537.33, stdev=1055.03, samples=3
00:11:13.003 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02%
00:11:13.003 lat (msec) : 2=0.36%, 4=98.21%, 10=1.30%, 20=0.09%
00:11:13.003 cpu : usr=99.35%, sys=0.05%, ctx=4, majf=0, minf=606
00:11:13.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:13.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:13.003 issued rwts: total=40989,40895,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:13.003 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:13.003
00:11:13.003 Run status group 0 (all jobs):
00:11:13.003 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec
00:11:13.003 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2001-2001msec
00:11:13.003 -----------------------------------------------------
00:11:13.003 Suppressions used:
00:11:13.003 count bytes template
00:11:13.003 1 32 /usr/src/fio/parse.c
00:11:13.003 1 8 libtcmalloc_minimal.so
00:11:13.003 -----------------------------------------------------
00:11:13.003
00:11:13.003 03:21:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:13.003 03:21:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:13.003 03:21:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:13.003 03:21:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
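[editor's note] A quick sanity check for these tables: with a fixed 4096-byte block size, the reported bandwidth is just IOPS times the block size, e.g. for the read line above:

    echo $((20500 * 4096))   # = 83968000, i.e. ~83.9 MB/s, matching "80.0MiB/s (83.9MB/s)"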
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:13.555 03:21:36 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:13.555 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:13.555 fio-3.35 00:11:13.555 Starting 1 thread 00:11:20.133 00:11:20.133 test: (groupid=0, jobs=1): err= 0: pid=65821: Tue Nov 5 03:21:42 2024 00:11:20.133 read: IOPS=23.1k, BW=90.1MiB/s (94.5MB/s)(180MiB/2001msec) 00:11:20.133 slat (nsec): min=3747, max=50815, avg=4220.69, stdev=919.25 00:11:20.133 clat (usec): min=279, max=10820, avg=2766.15, stdev=305.85 00:11:20.133 lat (usec): min=283, max=10871, avg=2770.37, stdev=306.26 00:11:20.133 clat percentiles (usec): 00:11:20.133 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:20.133 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:20.133 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 3032], 00:11:20.133 | 99.00th=[ 3916], 99.50th=[ 4490], 99.90th=[ 5932], 99.95th=[ 8356], 00:11:20.133 | 99.99th=[10552] 00:11:20.133 bw ( KiB/s): min=89269, max=94144, per=99.61%, avg=91895.00, stdev=2459.27, samples=3 00:11:20.133 iops : min=22317, max=23536, avg=22973.67, stdev=614.95, samples=3 00:11:20.133 write: IOPS=22.9k, BW=89.6MiB/s (93.9MB/s)(179MiB/2001msec); 0 zone resets 00:11:20.133 slat (nsec): min=3854, max=37053, avg=4624.61, stdev=953.10 00:11:20.133 clat (usec): min=182, max=10698, avg=2774.50, stdev=317.42 00:11:20.133 lat (usec): min=187, max=10710, avg=2779.13, stdev=317.81 00:11:20.133 clat percentiles (usec): 00:11:20.133 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:20.133 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:20.133 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 3032], 
00:11:20.133 | 99.00th=[ 4047], 99.50th=[ 4555], 99.90th=[ 6259], 99.95th=[ 8717], 00:11:20.133 | 99.99th=[10290] 00:11:20.133 bw ( KiB/s): min=88806, max=93920, per=100.00%, avg=91994.00, stdev=2780.78, samples=3 00:11:20.133 iops : min=22201, max=23480, avg=22998.33, stdev=695.48, samples=3 00:11:20.133 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:20.133 lat (msec) : 2=0.40%, 4=98.58%, 10=0.96%, 20=0.02% 00:11:20.133 cpu : usr=99.55%, sys=0.00%, ctx=3, majf=0, minf=604 00:11:20.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:20.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.133 issued rwts: total=46150,45891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.133 00:11:20.133 Run status group 0 (all jobs): 00:11:20.133 READ: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=180MiB (189MB), run=2001-2001msec 00:11:20.133 WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=179MiB (188MB), run=2001-2001msec 00:11:20.133 ----------------------------------------------------- 00:11:20.133 Suppressions used: 00:11:20.133 count bytes template 00:11:20.133 1 32 /usr/src/fio/parse.c 00:11:20.133 1 8 libtcmalloc_minimal.so 00:11:20.133 ----------------------------------------------------- 00:11:20.133 00:11:20.133 03:21:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:20.133 03:21:42 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:20.133 00:11:20.133 real 0m20.333s 00:11:20.133 user 0m15.050s 00:11:20.133 sys 0m6.305s 00:11:20.133 03:21:42 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.133 ************************************ 00:11:20.133 END TEST nvme_fio 00:11:20.133 ************************************ 00:11:20.133 03:21:42 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 00:11:20.133 real 1m35.873s 00:11:20.133 user 3m42.811s 00:11:20.133 sys 0m26.665s 00:11:20.133 03:21:42 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.133 ************************************ 00:11:20.133 END TEST nvme 00:11:20.133 ************************************ 00:11:20.133 03:21:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 03:21:42 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:20.133 03:21:42 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:20.133 03:21:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:20.133 03:21:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.134 03:21:42 -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 ************************************ 00:11:20.134 START TEST nvme_scc 00:11:20.134 ************************************ 00:11:20.134 03:21:42 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:20.134 * Looking for test storage... 
00:11:20.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@345 -- # : 1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@368 -- # return 0
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.134 --rc genhtml_branch_coverage=1
00:11:20.134 --rc genhtml_function_coverage=1
00:11:20.134 --rc genhtml_legend=1
00:11:20.134 --rc geninfo_all_blocks=1
00:11:20.134 --rc geninfo_unexecuted_blocks=1
00:11:20.134
00:11:20.134 '
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.134 --rc genhtml_branch_coverage=1
00:11:20.134 --rc genhtml_function_coverage=1
00:11:20.134 --rc genhtml_legend=1
00:11:20.134 --rc geninfo_all_blocks=1
00:11:20.134 --rc geninfo_unexecuted_blocks=1
00:11:20.134
00:11:20.134 '
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.134 --rc genhtml_branch_coverage=1
00:11:20.134 --rc genhtml_function_coverage=1
00:11:20.134 --rc genhtml_legend=1
00:11:20.134 --rc geninfo_all_blocks=1
00:11:20.134 --rc geninfo_unexecuted_blocks=1
00:11:20.134
00:11:20.134 '
00:11:20.134 03:21:43 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.134 --rc genhtml_branch_coverage=1
00:11:20.134 --rc genhtml_function_coverage=1
00:11:20.134 --rc genhtml_legend=1
00:11:20.134 --rc geninfo_all_blocks=1
00:11:20.134 --rc geninfo_unexecuted_blocks=1
00:11:20.134
00:11:20.134 '
00:11:20.134 03:21:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:20.134 03:21:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:20.134 03:21:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:20.134 03:21:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:20.134 03:21:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:20.134 03:21:43 nvme_scc -- paths/export.sh@5 -- # export PATH
00:11:20.134 03:21:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:20.134 03:21:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:11:20.134 03:21:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:20.134 03:21:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:11:20.134 03:21:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:11:20.134 03:21:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:11:20.134 03:21:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:20.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:20.654 Waiting for block devices as requested
00:11:20.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:20.654 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:20.913 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:20.913 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:26.200 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:26.200 03:21:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:26.200 03:21:49 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:26.200 03:21:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:11:26.200 03:21:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:26.200 03:21:49 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
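[editor's note] The nvme_get loop being traced here folds `nvme id-ctrl` output ("key : value" lines) into a global bash associative array, one entry per controller register (nvme0[vid]=0x1b36 and so on). A simplified version of the same idea (a sketch, not the literal functions.sh code, which also shifts arguments and normalizes multi-word values):

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                # skip banner and blank lines
        nvme0[${reg//[[:space:]]/}]=${val# }     # strip key padding and the leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]}"                         # 0x1b36 for this QEMU controller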
00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:26.200 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # 
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:11:26.201 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
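Two of those values are easy to misread: wctemp and cctemp are expressed in Kelvin, so 343 and 373 are the usual QEMU defaults of roughly 70 C (warning threshold) and 100 C (critical threshold). A quick conversion, arithmetic only:

  echo "$(( 343 - 273 ))C $(( 373 - 273 ))C"   # 70C 100C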
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:11:26.202 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
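For readers following the trace rather than the test verdict: everything above is nvme/functions.sh walking nvme-cli's "field : value" output and storing each pair into a global associative array named after the device. A minimal sketch reconstructed from the trace lines (functions.sh@16-@23); it is illustrative, not the verbatim SPDK source, and the whitespace trimming is simplified:

  #!/usr/bin/env bash
  # Run an nvme-cli query and load every "field : value" line of its output
  # into a global associative array named after the device (e.g. nvme0[mdts]=7).
  NVME=/usr/local/src/nvme-cli/nvme        # binary path as it appears in the trace

  nvme_get() {
          local ref=$1 reg val             # trace line functions.sh@17
          shift                            # functions.sh@18
          local -gA "$ref=()"              # functions.sh@20: global empty array
          while IFS=: read -r reg val; do  # functions.sh@21
                  [[ -n $val ]] || continue          # functions.sh@22: skip non-value lines
                  reg=${reg//[[:space:]]/}           # "sn   " -> "sn" (trimming simplified)
                  eval "${ref}[\$reg]=\$val"         # functions.sh@23: nvme0[sn]='12341 '
          done < <("$NVME" "$@")
  }

  # Usage matching the trace, then e.g. echo "${nvme0[mdts]}" prints 7:
  # nvme_get nvme0 id-ctrl /dev/nvme0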
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
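The same helper is then applied per namespace: the loop at functions.sh@54 enumerates the controller's namespace nodes and captures id-ns into nvme0n1[]. A hedged sketch of that step (loop shape taken from trace lines @53-@58; nvme_get is the sketch above, so treat this as illustrative):

  # For each namespace node under the controller, capture id-ns into its own
  # array and record it in the controller's namespace map.
  declare -gA nvme0_ns=()
  declare -n _ctrl_ns=nvme0_ns                   # functions.sh@53: nameref into the map
  for ns in /sys/class/nvme/nvme0/nvme0n1*; do   # "$ctrl/${ctrl##*/}n"* in the trace
          [[ -e $ns ]] || continue               # functions.sh@55
          ns_dev=${ns##*/}                       # functions.sh@56: -> nvme0n1
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
          _ctrl_ns[${ns_dev##*n}]=$ns_dev        # functions.sh@58: key "1" -> nvme0n1
  done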
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:11:26.203 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:26.204 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
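Those geometry fields are enough to sanity-check the namespace size: flbas=0x4 selects LBA format 4 (shown just below as "ms:0 lbads:12", i.e. 2^12 = 4096-byte blocks with no metadata), and nsze counts logical blocks. Arithmetic only, variable names ours:

  nsze=0x140000   # namespace size in logical blocks
  lbads=12        # from the in-use format lbaf4: ms:0 lbads:12
  echo $(( nsze * (1 << lbads) ))           # 5368709120 bytes
  echo $(( (nsze * (1 << lbads)) >> 30 ))   # 5 (GiB)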
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:26.205 03:21:49 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:26.205 03:21:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:11:26.205 03:21:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:26.205 03:21:49 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
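At this point the first controller is fully registered: ctrls, nvmes, bdfs and ordered_ctrls map the device name to its namespace table and PCI address, and the outer loop moves on to nvme1 (0000:00:10.0) once pci_can_use accepts it against the apparently empty allow/block filters. A hypothetical consumer of those maps, for illustration only (array names and sample values are from the trace; the loop itself is ours):

  # Walk the controllers in discovery order and report where each one sits.
  declare -A ctrls=([nvme0]=nvme0) nvmes=([nvme0]=nvme0_ns) bdfs=([nvme0]=0000:00:11.0)
  declare -a ordered_ctrls=([0]=nvme0)
  for ctrl in "${ordered_ctrls[@]}"; do
          [[ -n $ctrl ]] || continue
          echo "$ctrl -> BDF ${bdfs[$ctrl]}, namespaces in ${nvmes[$ctrl]}"
  done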
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:26.205 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:26.206 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:26.207 03:21:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.207 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
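The trace above is the expansion of the nvme_get loop in nvme/functions.sh (@16-23): it runs the locally built nvme-cli against the device and splits each "key : value" line of the identify output into a bash associative array declared at global scope. A minimal sketch of that pattern follows; it is simplified, and parse_id_output is an illustrative name, not the script's own:

parse_id_output() {
    local ref=$1 sub=$2 dev=$3 reg val
    local -gA "$ref=()"                 # declare/reset the array at global scope
    while IFS=: read -r reg val; do     # split on the first ':'; val keeps any later ':'s
        reg=${reg//[[:space:]]/}        # "lbaf  7" becomes key "lbaf7", as in the trace
        [[ -n $reg ]] || continue       # skip blank lines in the tool output
        eval "${ref}[\$reg]=\"\${val# }\""   # e.g. nvme1[frmw]="0x3"
    done < <(/usr/local/src/nvme-cli/nvme "$sub" "$dev")
}
# e.g. parse_id_output nvme1 id-ctrl /dev/nvme1; echo "${nvme1[frmw]}"   # 0x3

Splitting with IFS=: and read -r reg val assigns everything after the first colon to val, which is why composite values such as the ps0 power-state string ("mp:25.00W operational enlat:16 ...") survive intact.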
00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.208 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:26.209 
03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:26.209 03:21:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:26.209 03:21:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:26.209 03:21:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:26.209 03:21:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:26.209 03:21:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:26.210 03:21:49 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.210 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:26.476 03:21:49 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:26.476 03:21:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.476 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
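Around that per-field loop (functions.sh@47-63 in the trace), the script walks sysfs to discover each controller, parses its id-ctrl and per-namespace id-ns data, and records the PCI address. Roughly, reusing the hypothetical parse_id_output above and omitting the script's pci_can_use filter and ordered_ctrls bookkeeping:

declare -A ctrls nvmes bdfs                 # trace: ctrls[...]=nvme1, bdfs[...]=0000:00:10.0
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                    # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")   # one way to recover the BDF
    parse_id_output "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    for ns in "$ctrl/${ctrl_dev}n"*; do     # e.g. /sys/class/nvme/nvme1/nvme1n1
        [[ -e $ns ]] && parse_id_output "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
done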
00:11:26.477 03:21:49 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get reads the rest of the id-ctrl output for nvme2, one "reg : val" pair per iteration, and stores each value with eval 'nvme2[reg]="val"'.
00:11:26.477 03:21:49 nvme_scc -- # Fields parsed as 0: mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax endgidmax anatt anacap anagrpmax nanagrpid pels domainid megcap maxcmd fuses fna awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff fcatt msdbd ofcs
00:11:26.478 03:21:49 nvme_scc -- # Non-zero fields: nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[nn]=256 nvme2[oncs]=0x15d nvme2[vwc]=0x7 nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[subnqn]=nqn.2019-08.org.qemu:12342 nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]=-
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:26.478 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
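The trace shows the same mechanism on every field: functions.sh@16 runs nvme-cli, @21 splits each output line on ':' into reg/val, @22 skips lines without a value, and @23 evals the assignment into the named associative array. A minimal bash sketch of that loop, reconstructed from those markers (the trimming steps are assumptions; the real SPDK helper may differ in detail):

  nvme_get() {
      local ref=$1 reg val                 # @17: name of the array to fill (e.g. nvme2)
      shift                                # @18: remaining args are the command to run
      local -gA "$ref=()"                  # @20: declare a global associative array
      while IFS=: read -r reg val; do      # @21: split "reg : val" output lines
          [[ -n $val ]] || continue        # @22: keep only lines that carry a value
          reg=${reg//[[:space:]]/}         # trim the key (assumption, not in the trace)
          val=${val# }                     # trim one leading space (assumption)
          eval "${ref}[${reg}]=\"\$val\""  # @23: e.g. nvme2[kas]="0"
      done < <("$@")                       # @16: e.g. nvme id-ctrl /dev/nvme2
  }

Invoked as nvme_get nvme2 id-ctrl /dev/nvme2, a loop like this would leave nvme2[sqes]=0x66, nvme2[oncs]=0x15d, and the rest queryable by later test steps.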
00:11:26.479 03:21:49 nvme_scc -- # nvme2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:26.479 03:21:49 nvme_scc -- # Fields parsed as 0: dps nmic rescap fpi nawun nawupf nacwu nabsn nabo nabspf noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid
00:11:26.480 03:21:49 nvme_scc -- # LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
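The @53-@58 markers just summarized sketch the surrounding loop: a nameref maps NSIDs to the per-namespace arrays while each namespace node under the controller is parsed in turn. Roughly (the wrapper name and the ns_dev derivation are illustrative, not from the trace):

  get_ctrl_namespaces() {                       # hypothetical wrapper name
      local ctrl=$1 ns ns_dev
      local -n _ctrl_ns="${ctrl##*/}_ns"        # @53: e.g. nameref to nvme2_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do       # @54: nvme2n1 nvme2n2 nvme2n3 ...
          [[ -e $ns ]] || continue              # @55: only namespaces that exist
          ns_dev=${ns##*/}                      # @56: e.g. nvme2n1 (derivation assumed)
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill nvme2n1[] from id-ns
          _ctrl_ns[${ns##*n}]=$ns_dev           # @58: NSID 1 -> nvme2n1
      done
  }

Used as: declare -A nvme2_ns; get_ctrl_namespaces /sys/class/nvme/nvme2 -- which would leave nvme2_ns mapping 1, 2, 3 to nvme2n1, nvme2n2, nvme2n3 as the trace below records.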
00:11:26.480 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:26.481 03:21:49 nvme_scc -- # nvme2n2 id-ns output parses identically to nvme2n1: nsze=ncap=nuse=0x100000, nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, nguid/eui64 all zero, every other field 0, and the same lbaf0-lbaf7 table with lbaf4 ('ms:0 lbads:12 rp:0') marked in use.
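The lbaf tables pin down the namespace geometry: FLBAS bits 3:0 index the LBA-format list, and LBADS is log2 of the data size, so flbas=0x4 selects lbaf4 with 4096-byte blocks and no metadata, exactly the entry the dump marks '(in use)'. A two-line check:

  flbas=0x4 lbads=12                          # values parsed for nvme2n1/n2 above
  printf 'in use: lbaf%d, %d-byte blocks\n' \
      $(( flbas & 0xf )) $(( 1 << lbads ))    # -> "in use: lbaf4, 4096-byte blocks"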
00:11:26.482 03:21:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:11:26.482 03:21:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:26.482 03:21:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:26.482 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:26.483 03:21:49 nvme_scc -- # nvme2n3 parses the same way: nsze=ncap=nuse=0x100000, nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, nguid/eui64 all zero, and the lbaf table repeats, with lbaf4 in use, through lbaf6 ('ms:16 lbads:12 rp:0').
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:26.483 03:21:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:26.483 03:21:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:26.483 03:21:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:26.483 03:21:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:26.483 03:21:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:26.483 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
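The dump above captured nvme3[mdts]=7. MDTS is expressed as a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN), so it bounds the largest single transfer the harness can issue to this controller. A minimal sketch of the arithmetic, assuming the usual 4 KiB minimum page size on these QEMU controllers (the page size is an assumption, not read from this log):

    # mdts from the nvme3 dump above; CAP.MPSMIN of 4 KiB is assumed
    mdts=7
    mpsmin_bytes=$((4 * 1024))
    echo $(( (1 << mdts) * mpsmin_bytes ))   # 524288 bytes, i.e. 512 KiB per command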
00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
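Every field captured above comes from the same three-step dance in functions.sh: `IFS=:` plus `read -r reg val` splits one line of `nvme id-ctrl` output into a register name and its value, and `eval` then stores it into a caller-named associative array declared with `local -gA`. A standalone sketch of the same parsing pattern, assuming nvme-cli is installed and /dev/nvme0 exists; it drops the eval indirection because the array name is fixed here:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the padding around the field name
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oncs=${ctrl[oncs]}"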
00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.484 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 
03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
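The wctemp=343 and cctemp=373 captured a little above are the composite-temperature warning and critical thresholds, which the NVMe spec reports in Kelvin. A quick check of what QEMU advertises, plain arithmetic and nothing harness-specific:

    for k in 343 373; do echo "${k} K = $((k - 273)) C"; done   # 70 C and 100 C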
00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:26.485 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
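The sqes=0x66 and cqes=0x44 captured just above each pack two sizes into one byte: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, both as log2 of bytes. Decoding the captured values:

    sqes=0x66; cqes=0x44
    printf 'SQE min/max: %d/%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQE min/max: %d/%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # -> 64/64-byte submission entries, 16/16-byte completion entries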
00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:26.486 03:21:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:26.486 03:21:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:26.486 03:21:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:26.487 
03:21:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:26.487 03:21:49 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:26.487 03:21:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:26.487 03:21:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:26.487 03:21:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.999 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:27.999 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:27.999 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:27.999 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:11:28.258 03:21:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:28.258 03:21:51 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:28.258 03:21:51 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.258 03:21:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:28.258 ************************************ 00:11:28.258 START TEST nvme_simple_copy 00:11:28.258 ************************************ 00:11:28.258 03:21:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:28.517 Initializing NVMe Controllers 00:11:28.517 Attaching to 0000:00:10.0 00:11:28.517 Controller supports SCC. Attached to 0000:00:10.0 00:11:28.517 Namespace ID: 1 size: 6GB 00:11:28.517 Initialization complete. 00:11:28.517 00:11:28.517 Controller QEMU NVMe Ctrl (12340 ) 00:11:28.517 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:28.517 Namespace Block Size:4096 00:11:28.517 Writing LBAs 0 to 63 with Random Data 00:11:28.517 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:28.517 LBAs matching Written Data: 64 00:11:28.517 00:11:28.517 real 0m0.320s 00:11:28.517 user 0m0.106s 00:11:28.517 sys 0m0.111s 00:11:28.517 03:21:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.517 03:21:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:28.517 ************************************ 00:11:28.517 END TEST nvme_simple_copy 00:11:28.517 ************************************ 00:11:28.517 00:11:28.517 real 0m9.058s 00:11:28.517 user 0m1.500s 00:11:28.517 sys 0m2.631s 00:11:28.517 03:21:52 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.517 03:21:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:28.517 ************************************ 00:11:28.517 END TEST nvme_scc 00:11:28.517 ************************************ 00:11:28.517 03:21:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:28.517 03:21:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:28.517 03:21:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:28.518 03:21:52 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:28.518 03:21:52 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:28.518 03:21:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.518 03:21:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.518 03:21:52 -- common/autotest_common.sh@10 -- # set +x 00:11:28.777 ************************************ 00:11:28.777 START TEST nvme_fdp 00:11:28.777 ************************************ 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:11:28.777 * Looking for test storage... 
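The nvme1/nvme0/nvme3/nvme2 walk above is ctrl_has_scc at work: a bash nameref (`local -n`) resolves each controller name to its associative array, and `(( oncs & 1 << 8 ))` tests ONCS bit 8, which advertises the simple Copy command. Every controller reports oncs=0x15d, which has that bit set, so all four qualify and nvme1 is returned first. A minimal sketch of the same check, assuming an array populated as in the trace:

    declare -A nvme1=([oncs]=0x15d)
    ctrl_supports_scc() {
        local -n _ctrl=$1               # nameref: $1 names the controller array
        (( _ctrl[oncs] & (1 << 8) ))    # ONCS bit 8 = Copy command support
    }
    ctrl_supports_scc nvme1 && echo "nvme1 supports simple copy"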
00:11:28.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.777 03:21:52 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.777 --rc genhtml_branch_coverage=1 00:11:28.777 --rc genhtml_function_coverage=1 00:11:28.777 --rc genhtml_legend=1 00:11:28.777 --rc geninfo_all_blocks=1 00:11:28.777 --rc geninfo_unexecuted_blocks=1 00:11:28.777 00:11:28.777 ' 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.777 --rc genhtml_branch_coverage=1 00:11:28.777 --rc genhtml_function_coverage=1 00:11:28.777 --rc genhtml_legend=1 00:11:28.777 --rc geninfo_all_blocks=1 00:11:28.777 --rc geninfo_unexecuted_blocks=1 00:11:28.777 00:11:28.777 ' 00:11:28.777 03:21:52 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.778 --rc genhtml_branch_coverage=1 00:11:28.778 --rc genhtml_function_coverage=1 00:11:28.778 --rc genhtml_legend=1 00:11:28.778 --rc geninfo_all_blocks=1 00:11:28.778 --rc geninfo_unexecuted_blocks=1 00:11:28.778 00:11:28.778 ' 00:11:28.778 03:21:52 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.778 --rc genhtml_branch_coverage=1 00:11:28.778 --rc genhtml_function_coverage=1 00:11:28.778 --rc genhtml_legend=1 00:11:28.778 --rc geninfo_all_blocks=1 00:11:28.778 --rc geninfo_unexecuted_blocks=1 00:11:28.778 00:11:28.778 ' 00:11:28.778 03:21:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.778 03:21:52 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.778 03:21:52 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.778 03:21:52 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.778 03:21:52 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.778 03:21:52 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 03:21:52 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 03:21:52 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 03:21:52 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:28.778 03:21:52 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
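The `lt 1.15 2` call traced above is scripts/common.sh's cmp_versions gate for the lcov coverage options: both version strings are split on ".-:" and compared slot by slot, so 1.15 sorts before 2 because 1 < 2 in the first slot. A standalone sketch of the same idea, not the harness code itself:

    version_lt() {                       # true if $1 sorts before $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"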
00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:28.778 03:21:52 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:29.037 03:21:52 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:29.037 03:21:52 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:29.037 03:21:52 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:29.037 03:21:52 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:29.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.865 Waiting for block devices as requested 00:11:29.865 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.865 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:30.124 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:30.124 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.412 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:35.412 03:21:58 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:35.412 03:21:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:35.412 03:21:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:35.412 03:21:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.412 03:21:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
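From this point scan_nvme_ctrls walks /sys/class/nvme/nvme* and, for each controller it may bind, snapshots the output of nvme id-ctrl into a bash associative array: each trace entry pair below is the nvme_get helper splitting one "reg : value" line on ':' and eval-ing it into nvme0[reg]. A condensed sketch of that pattern, assuming nvme-cli is on PATH; the names are illustrative, not the exact helper from test/common/nvme/functions.sh:

    # Parse "nvme id-ctrl" key/value output into an associative array,
    # mirroring the register-by-register nvme_get entries in this trace.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # key with padding stripped
        val=${val#"${val%%[![:space:]]*}"}   # value, left-trimmed
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]} subnqn=${ctrl[subnqn]}"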
00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.412 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:35.413 03:21:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:35.413 03:21:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
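One of the values captured a few entries back, mdts=7, bounds every transfer the test can issue to this controller: per the NVMe spec, MDTS is a power of two expressed in units of the controller's minimum memory page size. Assuming the usual 4 KiB CAP.MPSMIN for the QEMU NVMe controller (an assumption; the trace does not show the CAP register), that caps a single transfer at 512 KiB:

    # What the mdts=7 parsed above implies for I/O sizing. Assumption:
    # CAP.MPSMIN is 0, i.e. a 4096-byte minimum page, typical for QEMU
    # NVMe; the trace itself does not record CAP.
    mdts=7
    page_min=4096
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * page_min / 1024 ))   # 512 KiB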
00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:35.413 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.413 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:35.414 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.414 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:35.415 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.415 
03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:35.415 03:21:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:35.415 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.415 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:35.416 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.416 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:35.417 03:21:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:35.417 03:21:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:35.417 03:21:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.417 03:21:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # 
00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1):
00:11:35.417 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 oaes=0x100 ctratt=0x8000 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 subnqn=nqn.2019-08.org.qemu:12340 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=- (all remaining id-ctrl fields 0)
00:11:35.420 03:21:58 nvme_fdp -- nvme/functions.sh@53 -- # _ctrl_ns=nvme1_ns, found /sys/class/nvme/nvme1/nvme1n1 -> nvme_get nvme1n1 id-ns /dev/nvme1n1:
00:11:35.420 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)' (all remaining id-ns fields 0)
00:11:35.421 03:21:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
00:11:35.421 03:21:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:11:35.421 03:21:58 nvme_fdp -- nvme/functions.sh@47 -- # next controller: /sys/class/nvme/nvme2, pci=0000:00:12.0, pci_can_use 0000:00:12.0 -> ctrl_dev=nvme2
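Every register assignment in this dump comes from the same nvme_get helper: it shells out to nvme-cli, splits each "field : value" line of the human-readable output on ':' with read, and evals the value into a global associative array named after the device, which is why string fields such as sn, mn and fr keep their trailing padding. A minimal sketch of that pattern (nvme_get, local -gA, IFS=:, read -r reg val and the eval are all visible in the trace; the key-trimming detail is an assumption):

# Sketch of the nvme_get pattern traced above (nvme/functions.sh@16-23).
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"               # declares a global assoc array, e.g. nvme2=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue     # skip lines with no value, as the trace does
        reg=${reg//[[:space:]]/}      # assumption: keys are trimmed to bare names
        val=${val# }                  # leading space dropped; trailing padding kept
        eval "${ref}[\$reg]=\$val"    # e.g. nvme2[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# Invocation seen in the trace: nvme_get nvme2 id-ctrl /dev/nvme2

Caching every field this way presumably lets the later FDP checks test values such as ${nvme2[oncs]} or ${nvme2[ctratt]} directly without re-running nvme-cli per field.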
00:11:35.421 03:21:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2):
00:11:35.422 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 oaes=0x100 ctratt=0x8000 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 (crdt1-3, nvmsr, vwci, mec, elpe, npss, avscc, apsta, mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug, kas all 0)
00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.423 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.424 03:21:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:35.424 03:21:58 nvme_fdp -- 
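The trace above is nvme_get filling the nvme2 associative array: functions.sh line 16 runs the bundled nvme-cli binary, line 21 splits each "name : value" output line on ':' via IFS, line 22 skips empty values, and line 23 evals the pair into the array. A minimal standalone sketch of that pattern (illustrative only, not the functions.sh source; the whitespace trimming and the example invocation are assumptions):

nvme_get_sketch() {
    # Usage: nvme_get_sketch <array-name> <command> [args...]
    local ref=$1 reg val
    shift
    local -gA "$ref=()"              # global associative array, e.g. nvme2
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # assumed: strip padding around the key
        val=${val# }                 # assumed: drop one leading space
        [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""
    done < <("$@")
}
# e.g.: nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
#       echo "${nvme2[subnqn]}"     # -> nqn.2019-08.org.qemu:12342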
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:35.424 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:35.425 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
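After the controller registers, functions.sh switches to per-namespace discovery (lines 53-58 in the trace): it binds a nameref to the controller's namespace map, globs /sys/class/nvme/nvme2/nvme2n*, and records each namespace device keyed by its number. A self-contained sketch of that enumeration (illustrative; the ctrl_ns name is an assumption, the /sys layout matches the trace):

declare -A ctrl_ns=()
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do   # expands to .../nvme2n1, .../nvme2n2, ...
    [[ -e $ns ]] || continue          # an unmatched glob stays literal
    ns_dev=${ns##*/}                  # e.g. nvme2n1
    ctrl_ns[${ns_dev##*n}]=$ns_dev    # "1" -> nvme2n1, as line 58 does
done
echo "namespaces: ${!ctrl_ns[*]}"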
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:35.426 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:35.427 03:21:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
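Each captured lbafN value follows nvme-cli's "ms:<metadata bytes> lbads:<log2 of the data size> rp:<relative performance>" layout, and flbas=0x4 selects LBA format 4, which is why lbaf4 is flagged "(in use)": these namespaces run 4096-byte blocks with no metadata. A sketch of decoding the in-use block size from the arrays filled above (assumes the nvme2n2 array from this trace):

lbaf=${nvme2n2[lbaf4]}                            # "ms:0 lbads:12 rp:0 (in use)"
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf")
echo "in-use block size: $((1 << lbads)) bytes"   # 2^12 = 4096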
ns in "$ctrl/${ctrl##*/}n"* 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
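Each nvme_get block in this trace is the same mechanism repeating: nvme-cli's id-ns text output is split on the first colon (IFS=: read -r reg val) and each field lands in a bash associative array keyed by register name. A minimal, self-contained sketch of that pattern, assuming nvme-cli's plain-text "field : value" output; the nameref assignment stands in for the script's eval 'nvmeXnY[reg]="val"' calls, and the helper name nvme_get_sketch is illustrative, not part of functions.sh:

#!/usr/bin/env bash
# Sketch of the parsing loop traced above: read "reg : val" pairs from
# `nvme id-ns` and store them in an associative array named by $1.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -n _arr=$ref                # nameref, e.g. _arr -> nvme2n3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field names are space-padded
        [[ -n $reg ]] && _arr[$reg]=${val# }
    done < <(nvme id-ns "$dev")       # assumes nvme-cli's text output
}

declare -A nvme2n3=()
nvme_get_sketch nvme2n3 /dev/nvme2n3
echo "nsze=${nvme2n3[nsze]:-?} flbas=${nvme2n3[flbas]:-?}"

Because read hands the remainder of the line to the last variable, multi-colon lines such as "lbaf 4 : ms:0 lbads:12 rp:0 (in use)" keep their full value, which is why entries like nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' appear in the trace; lbads is the log2 of the LBA data size, so lbads:12 means 4096-byte blocks, and flbas=0x4 selects that format.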
00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:35.428 
03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.428 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:35.429 03:21:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:35.429 03:21:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:35.429 03:21:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.429 03:21:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.429 03:21:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:35.711 03:21:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:35.711 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 
03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.712 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 
03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:35.713 03:21:59 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
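Two of the values captured for nvme3 a few entries back are worth decoding: sqes=0x66 and cqes=0x44 each pack two 4-bit log2 entry sizes, the required size in the low nibble and the maximum in the high nibble. A quick sketch using the values from this run:

# Decode the SQES/CQES bytes reported by nvme3 above. Low nibble = log2 of
# the required queue entry size, high nibble = log2 of the maximum size.
sqes=0x66 cqes=0x44
printf 'SQ entry: %d..%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
printf 'CQ entry: %d..%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
# -> SQ entry: 64..64 bytes, CQ entry: 16..16 bytes

Those are the standard NVMe sizes: 64-byte submission queue entries and 16-byte completion queue entries.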
00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:35.714 03:21:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
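The selection pass running through here is why only nvme3 survives: ctrl_has_fdp masks each controller's CTRATT against 1 << 19, the Flexible Data Placement attribute bit, so nvme3's 0x88010 qualifies while the 0x8000 reported by the other controllers does not. A standalone sketch of the same test:

# Sketch of the ctrl_has_fdp check traced here: CTRATT bit 19 (0x80000)
# advertises Flexible Data Placement support.
has_fdp() { (( $1 & 1 << 19 )); }

for ctratt in 0x8000 0x88010; do
    has_fdp "$ctratt" && echo "$ctratt: FDP capable" || echo "$ctratt: no FDP"
done
# -> 0x8000: no FDP, 0x88010: FDP capable (nvme3 in this run)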
00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:35.714 03:21:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:35.715 03:21:59 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:35.715 03:21:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:35.715 03:21:59 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:35.715 03:21:59 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:36.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:37.218 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.218 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.218 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.218 03:22:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:37.218 03:22:00 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:37.218 03:22:00 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.218 03:22:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:37.218 ************************************ 00:11:37.218 START TEST nvme_flexible_data_placement 00:11:37.218 ************************************ 00:11:37.218 03:22:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:37.478 Initializing NVMe Controllers 00:11:37.478 Attaching to 0000:00:13.0 00:11:37.478 Controller supports FDP Attached to 0000:00:13.0 00:11:37.478 Namespace ID: 1 Endurance Group ID: 1 00:11:37.478 Initialization complete. 00:11:37.478 00:11:37.478 ================================== 00:11:37.478 == FDP tests for Namespace: #01 == 00:11:37.478 ================================== 00:11:37.478 00:11:37.478 Get Feature: FDP: 00:11:37.478 ================= 00:11:37.478 Enabled: Yes 00:11:37.478 FDP configuration Index: 0 00:11:37.478 00:11:37.478 FDP configurations log page 00:11:37.478 =========================== 00:11:37.478 Number of FDP configurations: 1 00:11:37.478 Version: 0 00:11:37.478 Size: 112 00:11:37.478 FDP Configuration Descriptor: 0 00:11:37.478 Descriptor Size: 96 00:11:37.478 Reclaim Group Identifier format: 2 00:11:37.478 FDP Volatile Write Cache: Not Present 00:11:37.478 FDP Configuration: Valid 00:11:37.478 Vendor Specific Size: 0 00:11:37.478 Number of Reclaim Groups: 2 00:11:37.478 Number of Reclaim Unit Handles: 8 00:11:37.478 Max Placement Identifiers: 128 00:11:37.478 Number of Namespaces Supported: 256 00:11:37.478 Reclaim Unit Nominal Size: 6000000 bytes 00:11:37.478 Estimated Reclaim Unit Time Limit: Not Reported 00:11:37.478 RUH Desc #000: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #001: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #002: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #003: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #004: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #005: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #006: RUH Type: Initially Isolated 00:11:37.478 RUH Desc #007: RUH Type: Initially Isolated 00:11:37.478 00:11:37.478 FDP reclaim unit handle usage log page 00:11:37.478 ====================================== 00:11:37.478 Number of Reclaim Unit Handles: 8 00:11:37.478 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:37.478 RUH Usage Desc #001: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #002: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #003: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #004: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #005: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #006: RUH Attributes: Unused 00:11:37.478 RUH Usage Desc #007: RUH Attributes: Unused 00:11:37.478 00:11:37.478 FDP statistics log page 00:11:37.478 ======================= 00:11:37.478 Host bytes with metadata written: 1019142144 00:11:37.478 Media bytes with metadata written: 1019338752 00:11:37.478 Media bytes erased: 0 00:11:37.478 00:11:37.478 FDP Reclaim unit handle status 00:11:37.478 ============================== 00:11:37.478 Number of RUHS descriptors: 2 00:11:37.478 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005412 00:11:37.478 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:37.478 00:11:37.478 FDP write on placement id: 0 success 00:11:37.478 00:11:37.478 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:11:37.478 00:11:37.478 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:37.478 00:11:37.478 Get Feature: FDP Events for Placement handle: #0 00:11:37.478 ======================== 00:11:37.478 Number of FDP Events: 6 00:11:37.478 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:37.478 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:37.478 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:37.478 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:37.478 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:37.478 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:37.478 00:11:37.478 FDP events log page 00:11:37.478 =================== 00:11:37.478 Number of FDP events: 1 00:11:37.478 FDP Event #0: 00:11:37.478 Event Type: RU Not Written to Capacity 00:11:37.478 Placement Identifier: Valid 00:11:37.478 NSID: Valid 00:11:37.478 Location: Valid 00:11:37.478 Placement Identifier: 0 00:11:37.478 Event Timestamp: 8 00:11:37.478 Namespace Identifier: 1 00:11:37.478 Reclaim Group Identifier: 0 00:11:37.478 Reclaim Unit Handle Identifier: 0 00:11:37.478 00:11:37.478 FDP test passed 00:11:37.478 00:11:37.478 real 0m0.290s 00:11:37.478 user 0m0.087s 00:11:37.478 sys 0m0.103s 00:11:37.478 ************************************ 00:11:37.478 END TEST nvme_flexible_data_placement 00:11:37.478 ************************************ 00:11:37.478 03:22:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.478 03:22:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:37.478 ************************************ 00:11:37.478 END TEST nvme_fdp 00:11:37.478 ************************************ 00:11:37.478 00:11:37.478 real 0m8.945s 00:11:37.478 user 0m1.520s 00:11:37.478 sys 0m2.523s 00:11:37.478 03:22:01 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.478 03:22:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 03:22:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:37.736 03:22:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:37.736 03:22:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:37.736 03:22:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.736 03:22:01 -- common/autotest_common.sh@10 -- # set +x 00:11:37.736 ************************************ 00:11:37.736 START TEST nvme_rpc 00:11:37.737 ************************************ 00:11:37.737 03:22:01 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:37.737 * Looking for test storage... 
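The nvme_fdp selection traced above (nvme/functions.sh, ctrl_has_fdp) reads each controller's CTRATT identify field and tests bit 19, the Flexible Data Placement capability bit: nvme0, nvme1 and nvme2 report 0x8000 and fail the check, while nvme3 reports 0x88010 and is echoed as the FDP-capable device. A minimal standalone sketch of the same bit test, with the CTRATT values copied from the trace:

# Sketch of the ctrl_has_fdp check (functions.sh@176-180); CTRATT bit 19
# advertises Flexible Data Placement support. Values are from the trace above.
declare -A ctratt=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
for ctrl in "${!ctratt[@]}"; do
    if (( ctratt[$ctrl] & 1 << 19 )); then
        echo "$ctrl"    # only nvme3 (0x88010) has bit 19 (0x80000) set
    fi
done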
00:11:37.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:37.737 03:22:01 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.737 03:22:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.737 03:22:01 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.996 03:22:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.996 --rc genhtml_branch_coverage=1 00:11:37.996 --rc genhtml_function_coverage=1 00:11:37.996 --rc genhtml_legend=1 00:11:37.996 --rc geninfo_all_blocks=1 00:11:37.996 --rc geninfo_unexecuted_blocks=1 00:11:37.996 00:11:37.996 ' 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.996 --rc genhtml_branch_coverage=1 00:11:37.996 --rc genhtml_function_coverage=1 00:11:37.996 --rc genhtml_legend=1 00:11:37.996 --rc geninfo_all_blocks=1 00:11:37.996 --rc geninfo_unexecuted_blocks=1 00:11:37.996 00:11:37.996 ' 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.996 --rc genhtml_branch_coverage=1 00:11:37.996 --rc genhtml_function_coverage=1 00:11:37.996 --rc genhtml_legend=1 00:11:37.996 --rc geninfo_all_blocks=1 00:11:37.996 --rc geninfo_unexecuted_blocks=1 00:11:37.996 00:11:37.996 ' 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.996 --rc genhtml_branch_coverage=1 00:11:37.996 --rc genhtml_function_coverage=1 00:11:37.996 --rc genhtml_legend=1 00:11:37.996 --rc geninfo_all_blocks=1 00:11:37.996 --rc geninfo_unexecuted_blocks=1 00:11:37.996 00:11:37.996 ' 00:11:37.996 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:37.996 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:37.996 03:22:01 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:37.997 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:37.997 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67211 00:11:37.997 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:37.997 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:37.997 03:22:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67211 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67211 ']' 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.997 03:22:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.256 [2024-11-05 03:22:01.588785] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:11:38.256 [2024-11-05 03:22:01.589070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67211 ] 00:11:38.256 [2024-11-05 03:22:01.767308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.515 [2024-11-05 03:22:01.881599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.515 [2024-11-05 03:22:01.881634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.453 03:22:02 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.453 03:22:02 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:39.453 03:22:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:39.453 Nvme0n1 00:11:39.453 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:39.453 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:39.711 request: 00:11:39.711 { 00:11:39.711 "bdev_name": "Nvme0n1", 00:11:39.711 "filename": "non_existing_file", 00:11:39.711 "method": "bdev_nvme_apply_firmware", 00:11:39.711 "req_id": 1 00:11:39.712 } 00:11:39.712 Got JSON-RPC error response 00:11:39.712 response: 00:11:39.712 { 00:11:39.712 "code": -32603, 00:11:39.712 "message": "open file failed." 00:11:39.712 } 00:11:39.712 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:39.712 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:39.712 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:39.971 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:39.971 03:22:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67211 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67211 ']' 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67211 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67211 00:11:39.971 killing process with pid 67211 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67211' 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67211 00:11:39.971 03:22:03 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67211 00:11:42.507 ************************************ 00:11:42.507 END TEST nvme_rpc 00:11:42.507 ************************************ 00:11:42.507 00:11:42.507 real 0m4.624s 00:11:42.507 user 0m8.435s 00:11:42.507 sys 0m0.781s 00:11:42.507 03:22:05 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.507 03:22:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.507 03:22:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:42.507 03:22:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:11:42.507 03:22:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:42.507 03:22:05 -- common/autotest_common.sh@10 -- # set +x 00:11:42.507 ************************************ 00:11:42.507 START TEST nvme_rpc_timeouts 00:11:42.507 ************************************ 00:11:42.507 03:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:42.507 * Looking for test storage... 00:11:42.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:42.507 03:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.507 03:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.507 03:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.507 03:22:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.507 --rc genhtml_branch_coverage=1 00:11:42.507 --rc genhtml_function_coverage=1 00:11:42.507 --rc genhtml_legend=1 00:11:42.507 --rc geninfo_all_blocks=1 00:11:42.507 --rc geninfo_unexecuted_blocks=1 00:11:42.507 00:11:42.507 ' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.507 --rc genhtml_branch_coverage=1 00:11:42.507 --rc genhtml_function_coverage=1 00:11:42.507 --rc genhtml_legend=1 00:11:42.507 --rc geninfo_all_blocks=1 00:11:42.507 --rc geninfo_unexecuted_blocks=1 00:11:42.507 00:11:42.507 ' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.507 --rc genhtml_branch_coverage=1 00:11:42.507 --rc genhtml_function_coverage=1 00:11:42.507 --rc genhtml_legend=1 00:11:42.507 --rc geninfo_all_blocks=1 00:11:42.507 --rc geninfo_unexecuted_blocks=1 00:11:42.507 00:11:42.507 ' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.507 --rc genhtml_branch_coverage=1 00:11:42.507 --rc genhtml_function_coverage=1 00:11:42.507 --rc genhtml_legend=1 00:11:42.507 --rc geninfo_all_blocks=1 00:11:42.507 --rc geninfo_unexecuted_blocks=1 00:11:42.507 00:11:42.507 ' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67290 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67290 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67322 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:42.507 03:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67322 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67322 ']' 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.507 03:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 [2024-11-05 03:22:06.168522] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:11:42.767 [2024-11-05 03:22:06.168882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67322 ] 00:11:43.026 [2024-11-05 03:22:06.351811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.026 [2024-11-05 03:22:06.471151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.026 [2024-11-05 03:22:06.471186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.963 Checking default timeout settings: 00:11:43.963 03:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.963 03:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:11:43.963 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:43.963 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:44.223 Making settings changes with rpc: 00:11:44.223 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:44.223 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:44.482 Check default vs. modified settings: 00:11:44.482 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:44.482 03:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 Setting action_on_timeout is changed as expected. 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 Setting timeout_us is changed as expected. 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:44.741 Setting timeout_admin_us is changed as expected. 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67290 /tmp/settings_modified_67290 00:11:44.741 03:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67322 00:11:44.741 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67322 ']' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67322 00:11:44.741 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:11:44.741 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:44.741 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67322 00:11:45.000 killing process with pid 67322 00:11:45.000 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:45.000 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:45.000 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67322' 00:11:45.000 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67322 00:11:45.000 03:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67322 00:11:47.536 RPC TIMEOUT SETTING TEST PASSED. 00:11:47.536 03:22:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
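The nvme_rpc_timeouts pass above reduces to snapshotting the target configuration before and after bdev_nvme_set_options and diffing three fields. A condensed sketch of that pattern, reusing the grep/awk/sed pipeline from the trace (file names shortened from the /tmp/settings_*_67290 pair used in the run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default        # defaults: action none, timeouts 0
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
done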
00:11:47.536 00:11:47.536 real 0m4.908s 00:11:47.536 user 0m9.231s 00:11:47.536 sys 0m0.803s 00:11:47.536 ************************************ 00:11:47.536 END TEST nvme_rpc_timeouts 00:11:47.536 ************************************ 00:11:47.536 03:22:10 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.536 03:22:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 03:22:10 -- spdk/autotest.sh@239 -- # uname -s 00:11:47.536 03:22:10 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:47.536 03:22:10 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:47.536 03:22:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:47.536 03:22:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.536 03:22:10 -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 ************************************ 00:11:47.536 START TEST sw_hotplug 00:11:47.536 ************************************ 00:11:47.536 03:22:10 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:47.536 * Looking for test storage... 00:11:47.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:47.536 03:22:10 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:47.536 03:22:10 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:11:47.536 03:22:10 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.536 03:22:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.536 --rc genhtml_branch_coverage=1 00:11:47.536 --rc genhtml_function_coverage=1 00:11:47.536 --rc genhtml_legend=1 00:11:47.536 --rc geninfo_all_blocks=1 00:11:47.536 --rc geninfo_unexecuted_blocks=1 00:11:47.536 00:11:47.536 ' 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.536 --rc genhtml_branch_coverage=1 00:11:47.536 --rc genhtml_function_coverage=1 00:11:47.536 --rc genhtml_legend=1 00:11:47.536 --rc geninfo_all_blocks=1 00:11:47.536 --rc geninfo_unexecuted_blocks=1 00:11:47.536 00:11:47.536 ' 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.536 --rc genhtml_branch_coverage=1 00:11:47.536 --rc genhtml_function_coverage=1 00:11:47.536 --rc genhtml_legend=1 00:11:47.536 --rc geninfo_all_blocks=1 00:11:47.536 --rc geninfo_unexecuted_blocks=1 00:11:47.536 00:11:47.536 ' 00:11:47.536 03:22:11 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.536 --rc genhtml_branch_coverage=1 00:11:47.536 --rc genhtml_function_coverage=1 00:11:47.536 --rc genhtml_legend=1 00:11:47.536 --rc geninfo_all_blocks=1 00:11:47.536 --rc geninfo_unexecuted_blocks=1 00:11:47.536 00:11:47.536 ' 00:11:47.536 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:48.363 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:48.363 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:48.363 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:48.363 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:48.363 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:48.363 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:48.363 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
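The nvme_in_userspace trace that follows resolves controller BDFs purely from PCI class codes: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A standalone sketch of the same filter, assembled from the lspci, grep, awk and tr steps visible in the trace:

# Print the BDF of every NVMe controller (class:subclass "0108", prog-if 02).
# grep -- -p02 keeps prog-if 02 entries; awk matches the quoted class field
# (cc includes the quotes lspci -mm adds); tr strips quotes from the output.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# here this yields 0000:00:10.0 through 0000:00:13.0, one BDF per line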
00:11:48.363 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:48.364 03:22:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:48.364 03:22:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:48.364 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:48.364 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:48.364 03:22:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:48.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:49.191 Waiting for block devices as requested 00:11:49.450 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:49.450 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:49.450 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:49.720 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.022 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:55.022 03:22:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:55.022 03:22:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:55.280 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:55.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.540 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:55.799 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:56.366 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.366 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68215 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:56.366 03:22:19 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:56.366 03:22:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:56.625 Initializing NVMe Controllers 00:11:56.625 Attaching to 0000:00:10.0 00:11:56.625 Attaching to 0000:00:11.0 00:11:56.625 Attached to 0000:00:10.0 00:11:56.625 Attached to 0000:00:11.0 00:11:56.625 Initialization complete. Starting I/O... 
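Each of the three hotplug events that follow surprise-removes both allowed controllers mid-I/O, waits, and re-attaches them; the echo steps traced at sw_hotplug.sh@40, @56 and @59-@62 match the shape of the Linux sysfs PCI hotplug interface. A plausible reconstruction of one cycle, where the sysfs paths are an assumption inferred from the trace rather than quoted from sw_hotplug.sh:

bdfs=(0000:00:10.0 0000:00:11.0)
for bdf in "${bdfs[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise removal (@40)
done
sleep 6                                            # hotplug_wait
echo 1 > /sys/bus/pci/rescan                       # rediscover the devices (@56)
for bdf in "${bdfs[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59
    echo "$bdf" > /sys/bus/pci/drivers_probe                             # @60/@61
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # @62
done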
00:11:56.625 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:56.625 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:56.625 00:11:58.002 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1576) 00:11:58.002 QEMU NVMe Ctrl (12341 ): 1576 I/Os completed (+1576) 00:11:58.002 00:11:58.569 QEMU NVMe Ctrl (12340 ): 3756 I/Os completed (+2180) 00:11:58.569 QEMU NVMe Ctrl (12341 ): 3756 I/Os completed (+2180) 00:11:58.569 00:11:59.949 QEMU NVMe Ctrl (12340 ): 6016 I/Os completed (+2260) 00:11:59.949 QEMU NVMe Ctrl (12341 ): 6016 I/Os completed (+2260) 00:11:59.949 00:12:00.886 QEMU NVMe Ctrl (12340 ): 8244 I/Os completed (+2228) 00:12:00.886 QEMU NVMe Ctrl (12341 ): 8244 I/Os completed (+2228) 00:12:00.886 00:12:01.823 QEMU NVMe Ctrl (12340 ): 10428 I/Os completed (+2184) 00:12:01.823 QEMU NVMe Ctrl (12341 ): 10428 I/Os completed (+2184) 00:12:01.823 00:12:02.392 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.392 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.392 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.392 [2024-11-05 03:22:25.919529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:02.392 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:02.393 [2024-11-05 03:22:25.921236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.921302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.921325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.921348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:02.393 [2024-11-05 03:22:25.924046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.924093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.924111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.924131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:12:02.393 EAL: Scan for (pci) bus failed. 00:12:02.393 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.393 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.393 [2024-11-05 03:22:25.961325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:02.393 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:02.393 [2024-11-05 03:22:25.962887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.962936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.962962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.962981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:02.393 [2024-11-05 03:22:25.965547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.965592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.965613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.393 [2024-11-05 03:22:25.965630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.651 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:02.651 03:22:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.651 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:02.651 EAL: Scan for (pci) bus failed. 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.651 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.651 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.651 Attaching to 0000:00:10.0 00:12:02.651 Attached to 0000:00:10.0 00:12:02.911 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.911 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.911 03:22:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.911 Attaching to 0000:00:11.0 00:12:02.911 Attached to 0000:00:11.0 00:12:03.849 QEMU NVMe Ctrl (12340 ): 2024 I/Os completed (+2024) 00:12:03.849 QEMU NVMe Ctrl (12341 ): 1792 I/Os completed (+1792) 00:12:03.849 00:12:04.787 QEMU NVMe Ctrl (12340 ): 4184 I/Os completed (+2160) 00:12:04.787 QEMU NVMe Ctrl (12341 ): 3955 I/Os completed (+2163) 00:12:04.787 00:12:05.756 QEMU NVMe Ctrl (12340 ): 6099 I/Os completed (+1915) 00:12:05.756 QEMU NVMe Ctrl (12341 ): 5874 I/Os completed (+1919) 00:12:05.756 00:12:06.691 QEMU NVMe Ctrl (12340 ): 8187 I/Os completed (+2088) 00:12:06.691 QEMU NVMe Ctrl (12341 ): 7968 I/Os completed (+2094) 00:12:06.691 00:12:07.628 QEMU NVMe Ctrl (12340 ): 10439 I/Os completed (+2252) 00:12:07.628 QEMU NVMe Ctrl (12341 ): 10220 I/Os completed (+2252) 00:12:07.628 00:12:08.565 QEMU NVMe Ctrl (12340 ): 12603 I/Os completed (+2164) 00:12:08.565 QEMU NVMe Ctrl (12341 ): 12384 I/Os completed (+2164) 00:12:08.565 00:12:09.942 QEMU NVMe Ctrl (12340 ): 14795 I/Os completed (+2192) 00:12:09.942 QEMU NVMe Ctrl (12341 ): 14576 I/Os completed (+2192) 
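Between removals the (+N) deltas show I/O resuming and climbing again, which is the effective pass criterion of this soak. A small sketch for sanity-checking that forward progress from a captured console log (hotplug.log is a hypothetical file name):

# Sum every per-interval delta printed as "... I/Os completed (+N)".
awk '/I\/Os completed \(\+/ { gsub(/[(+)]/, "", $NF); total += $NF }
     END { print "total I/Os observed:", total }' hotplug.log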
00:12:09.942 00:12:10.916 QEMU NVMe Ctrl (12340 ): 16995 I/Os completed (+2200) 00:12:10.916 QEMU NVMe Ctrl (12341 ): 16776 I/Os completed (+2200) 00:12:10.916 00:12:11.854 QEMU NVMe Ctrl (12340 ): 19175 I/Os completed (+2180) 00:12:11.854 QEMU NVMe Ctrl (12341 ): 18956 I/Os completed (+2180) 00:12:11.854 00:12:12.790 QEMU NVMe Ctrl (12340 ): 21371 I/Os completed (+2196) 00:12:12.790 QEMU NVMe Ctrl (12341 ): 21152 I/Os completed (+2196) 00:12:12.790 00:12:13.726 QEMU NVMe Ctrl (12340 ): 23303 I/Os completed (+1932) 00:12:13.726 QEMU NVMe Ctrl (12341 ): 23084 I/Os completed (+1932) 00:12:13.726 00:12:14.663 QEMU NVMe Ctrl (12340 ): 25199 I/Os completed (+1896) 00:12:14.663 QEMU NVMe Ctrl (12341 ): 24980 I/Os completed (+1896) 00:12:14.663 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.922 [2024-11-05 03:22:38.329673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:14.922 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:14.922 [2024-11-05 03:22:38.331574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.331742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.331800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.331944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:14.922 [2024-11-05 03:22:38.334903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.335054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.335080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.335099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.922 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.922 [2024-11-05 03:22:38.371184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:14.922 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:14.922 [2024-11-05 03:22:38.372831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.372879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.372907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.922 [2024-11-05 03:22:38.372926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.923 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:14.923 [2024-11-05 03:22:38.375533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.923 [2024-11-05 03:22:38.375578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.923 [2024-11-05 03:22:38.375600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.923 [2024-11-05 03:22:38.375619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.923 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:14.923 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.923 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:14.923 EAL: Scan for (pci) bus failed. 00:12:14.923 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.923 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.923 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:15.183 Attaching to 0000:00:10.0 00:12:15.183 Attached to 0000:00:10.0 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.183 03:22:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:15.183 Attaching to 0000:00:11.0 00:12:15.183 Attached to 0000:00:11.0 00:12:15.752 QEMU NVMe Ctrl (12340 ): 1140 I/Os completed (+1140) 00:12:15.752 QEMU NVMe Ctrl (12341 ): 916 I/Os completed (+916) 00:12:15.752 00:12:16.688 QEMU NVMe Ctrl (12340 ): 3364 I/Os completed (+2224) 00:12:16.688 QEMU NVMe Ctrl (12341 ): 3140 I/Os completed (+2224) 00:12:16.688 00:12:17.625 QEMU NVMe Ctrl (12340 ): 5552 I/Os completed (+2188) 00:12:17.625 QEMU NVMe Ctrl (12341 ): 5328 I/Os completed (+2188) 00:12:17.625 00:12:18.561 QEMU NVMe Ctrl (12340 ): 7508 I/Os completed (+1956) 00:12:18.561 QEMU NVMe Ctrl (12341 ): 7284 I/Os completed (+1956) 00:12:18.561 00:12:19.941 QEMU NVMe Ctrl (12340 ): 9544 I/Os completed (+2036) 00:12:19.941 QEMU NVMe Ctrl (12341 ): 9320 I/Os completed (+2036) 00:12:19.941 00:12:20.879 QEMU NVMe Ctrl (12340 ): 11784 I/Os completed (+2240) 00:12:20.879 QEMU NVMe Ctrl (12341 ): 11560 I/Os completed (+2240) 00:12:20.879 00:12:21.816 QEMU NVMe Ctrl (12340 ): 14028 I/Os completed (+2244) 00:12:21.816 QEMU NVMe Ctrl (12341 ): 13804 I/Os completed (+2244) 00:12:21.816 
00:12:22.754 QEMU NVMe Ctrl (12340 ): 16264 I/Os completed (+2236) 00:12:22.754 QEMU NVMe Ctrl (12341 ): 16040 I/Os completed (+2236) 00:12:22.754 00:12:23.690 QEMU NVMe Ctrl (12340 ): 18472 I/Os completed (+2208) 00:12:23.690 QEMU NVMe Ctrl (12341 ): 18248 I/Os completed (+2208) 00:12:23.690 00:12:24.628 QEMU NVMe Ctrl (12340 ): 20708 I/Os completed (+2236) 00:12:24.628 QEMU NVMe Ctrl (12341 ): 20484 I/Os completed (+2236) 00:12:24.628 00:12:25.565 QEMU NVMe Ctrl (12340 ): 22944 I/Os completed (+2236) 00:12:25.565 QEMU NVMe Ctrl (12341 ): 22720 I/Os completed (+2236) 00:12:25.565 00:12:26.531 QEMU NVMe Ctrl (12340 ): 25148 I/Os completed (+2204) 00:12:26.531 QEMU NVMe Ctrl (12341 ): 24929 I/Os completed (+2209) 00:12:26.531 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.470 [2024-11-05 03:22:50.714449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:27.470 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:27.470 [2024-11-05 03:22:50.716213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.716390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.716444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.716571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:27.470 [2024-11-05 03:22:50.719540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.719673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.719725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.719769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.470 [2024-11-05 03:22:50.750824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:27.470 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:27.470 [2024-11-05 03:22:50.752591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.752682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.752731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.752871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:27.470 [2024-11-05 03:22:50.755629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.755706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.755755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 [2024-11-05 03:22:50.755849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:27.470 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:27.470 EAL: Scan for (pci) bus failed. 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.470 03:22:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:27.470 Attaching to 0000:00:10.0 00:12:27.470 Attached to 0000:00:10.0 00:12:27.729 03:22:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.730 03:22:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.730 03:22:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.730 Attaching to 0000:00:11.0 00:12:27.730 Attached to 0000:00:11.0 00:12:27.730 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:27.730 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:27.730 [2024-11-05 03:22:51.086098] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:39.950 03:23:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:39.950 03:23:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.950 03:23:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.17 00:12:39.950 03:23:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.17 00:12:39.950 03:23:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:39.950 03:23:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.17 00:12:39.950 03:23:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2 00:12:39.950 remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s)) 03:23:03 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68215 00:12:46.516 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68215) - No such process 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68215 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68750 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:46.516 03:23:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68750 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68750 ']' 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:46.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:46.516 03:23:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.516 [2024-11-05 03:23:09.201917] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
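[Editor's note] The block above switches from the plugin-based test to the target-based one: the old pid 68215 is confirmed dead via `kill -0`, a fresh `spdk_tgt` is launched as pid 68750, a trap guarantees the target is killed and the PCI bus rescanned even on an aborted run, and `waitforlisten` blocks until the target answers on /var/tmp/spdk.sock. A hedged sketch of that waiting pattern — the loop bound and the probe RPC are my assumptions, not the exact autotest_common.sh code:

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                        # timed out
    }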
00:12:46.516 [2024-11-05 03:23:09.202045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68750 ] 00:12:46.516 [2024-11-05 03:23:09.385717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.516 [2024-11-05 03:23:09.504490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:47.084 03:23:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:47.084 03:23:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.653 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.653 03:23:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.653 03:23:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.653 [2024-11-05 03:23:16.471864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
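[Editor's note] The `local time=0 TIMEFORMAT=%2R` in the trace above is what later yields the bare "45.18"-style numbers: with that format, bash's `time` keyword prints only elapsed wall-clock seconds to two decimals. A minimal standalone illustration of the idiom (not the exact timing_cmd implementation):

    time_helper() {
        local TIMEFORMAT=%2R elapsed
        # `time` writes its report to the brace group's stderr; the 2>&1
        # lets the command substitution capture just the elapsed seconds
        elapsed=$({ time "$@" > /dev/null 2>&1; } 2>&1)
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }
    # usage: time_helper remove_attach_helper 3 6 true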
00:12:53.653 [2024-11-05 03:23:16.474238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.653 [2024-11-05 03:23:16.474304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.653 [2024-11-05 03:23:16.474329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.653 [2024-11-05 03:23:16.474361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.653 [2024-11-05 03:23:16.474377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.474394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 [2024-11-05 03:23:16.474411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.474428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.474444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 [2024-11-05 03:23:16.474468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.474482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.474500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 03:23:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.654 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:53.654 03:23:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:53.654 [2024-11-05 03:23:16.871219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
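[Editor's note] The bdev_bdfs trace above (rpc_cmd piped through jq and sort -u) together with the `(( 2 > 0 ))` / `sleep 0.5` steps pins down the wait loop: ask the target which PCI addresses still back an NVMe bdev, and spin until none do. A reconstruction from those xtrace lines — the jq filter is copied verbatim from the trace, though the script's actual loop shape may differ slightly:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done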
00:12:53.654 [2024-11-05 03:23:16.873650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.873693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.873713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 [2024-11-05 03:23:16.873736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.873751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.873763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 [2024-11-05 03:23:16.873778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.873789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.873803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 [2024-11-05 03:23:16.873815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.654 [2024-11-05 03:23:16.873828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.654 [2024-11-05 03:23:16.873840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.654 03:23:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.654 03:23:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.654 03:23:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.654 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.913 03:23:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.125 [2024-11-05 03:23:29.550818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:06.125 [2024-11-05 03:23:29.553418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.125 [2024-11-05 03:23:29.553570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.125 [2024-11-05 03:23:29.553806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.125 [2024-11-05 03:23:29.553932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.125 [2024-11-05 03:23:29.553971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.125 [2024-11-05 03:23:29.554076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.125 [2024-11-05 03:23:29.554134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.125 [2024-11-05 03:23:29.554223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.125 [2024-11-05 03:23:29.554281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.125 [2024-11-05 03:23:29.554398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.125 [2024-11-05 03:23:29.554437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.125 [2024-11-05 03:23:29.554645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.125 03:23:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:06.125 03:23:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:06.383 [2024-11-05 03:23:29.950176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:06.384 [2024-11-05 03:23:29.952787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.384 [2024-11-05 03:23:29.952935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.384 [2024-11-05 03:23:29.953097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.384 [2024-11-05 03:23:29.953166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.384 [2024-11-05 03:23:29.953253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.384 [2024-11-05 03:23:29.953329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.384 [2024-11-05 03:23:29.953427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.384 [2024-11-05 03:23:29.953464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.384 [2024-11-05 03:23:29.953690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.384 [2024-11-05 03:23:29.953745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.384 [2024-11-05 03:23:29.953780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.384 [2024-11-05 03:23:29.953829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.643 03:23:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.643 03:23:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.643 03:23:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:06.643 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:06.902 03:23:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.114 03:23:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:19.114 03:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:19.114 [2024-11-05 03:23:42.629769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:19.114 [2024-11-05 03:23:42.632447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.114 [2024-11-05 03:23:42.632597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.114 [2024-11-05 03:23:42.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.114 [2024-11-05 03:23:42.632864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.114 [2024-11-05 03:23:42.632956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.114 [2024-11-05 03:23:42.633025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.114 [2024-11-05 03:23:42.633133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.114 [2024-11-05 03:23:42.633180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.114 [2024-11-05 03:23:42.633296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.114 [2024-11-05 03:23:42.633366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.114 [2024-11-05 03:23:42.633400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.114 [2024-11-05 03:23:42.633510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.683 [2024-11-05 03:23:43.029149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:19.683 [2024-11-05 03:23:43.031892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.683 [2024-11-05 03:23:43.032043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.683 [2024-11-05 03:23:43.032214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.683 [2024-11-05 03:23:43.032279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.683 [2024-11-05 03:23:43.032384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.683 [2024-11-05 03:23:43.032440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.683 [2024-11-05 03:23:43.032538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.683 [2024-11-05 03:23:43.032575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.683 [2024-11-05 03:23:43.032631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.683 [2024-11-05 03:23:43.032682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.683 [2024-11-05 03:23:43.032859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.683 [2024-11-05 03:23:43.032998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.683 03:23:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.683 03:23:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.683 03:23:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:19.683 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:19.943 03:23:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.18 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.18 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:13:32.154 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:32.154 03:23:55 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:32.154 03:23:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:32.154 03:23:55 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:38.045 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:38.045 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:38.045 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.304 03:24:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.304 03:24:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.304 [2024-11-05 03:24:01.693584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:38.304 [2024-11-05 03:24:01.695941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.304 [2024-11-05 03:24:01.695987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.304 [2024-11-05 03:24:01.696004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.304 [2024-11-05 03:24:01.696033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.304 [2024-11-05 03:24:01.696044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.304 [2024-11-05 03:24:01.696059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.304 [2024-11-05 03:24:01.696072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.304 [2024-11-05 03:24:01.696086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.304 [2024-11-05 03:24:01.696098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.304 [2024-11-05 03:24:01.696113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.304 [2024-11-05 03:24:01.696124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.304 [2024-11-05 03:24:01.696141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.304 03:24:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:38.304 03:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:38.563 [2024-11-05 03:24:02.092938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:38.563 [2024-11-05 03:24:02.097393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.563 [2024-11-05 03:24:02.097433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.563 [2024-11-05 03:24:02.097467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.563 [2024-11-05 03:24:02.097490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.563 [2024-11-05 03:24:02.097504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.563 [2024-11-05 03:24:02.097516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.563 [2024-11-05 03:24:02.097532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.563 [2024-11-05 03:24:02.097543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.563 [2024-11-05 03:24:02.097557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.563 [2024-11-05 03:24:02.097569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.563 [2024-11-05 03:24:02.097582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.563 [2024-11-05 03:24:02.097594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:38.822 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.822 03:24:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.822 03:24:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.822 03:24:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.823 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:38.823 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:38.823 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:38.823 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:38.823 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:39.081 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:39.081 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:39.082 03:24:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:51.293 03:24:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.293 03:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:51.293 03:24:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:51.293 [2024-11-05 03:24:14.672711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:51.293 [2024-11-05 03:24:14.674798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.293 [2024-11-05 03:24:14.674963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.293 [2024-11-05 03:24:14.675079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.293 [2024-11-05 03:24:14.675201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.293 [2024-11-05 03:24:14.675238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.293 [2024-11-05 03:24:14.675390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.293 [2024-11-05 03:24:14.675485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.293 [2024-11-05 03:24:14.675525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.293 [2024-11-05 03:24:14.675574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.293 [2024-11-05 03:24:14.675676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.293 [2024-11-05 03:24:14.675715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.293 [2024-11-05 03:24:14.675766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:51.293 03:24:14 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:51.293 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:51.294 03:24:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.294 03:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:51.294 03:24:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:51.294 03:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:51.861 [2024-11-05 03:24:15.171896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:51.861 [2024-11-05 03:24:15.174363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.861 [2024-11-05 03:24:15.174541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.861 [2024-11-05 03:24:15.174667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.861 [2024-11-05 03:24:15.174742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.861 [2024-11-05 03:24:15.174783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.862 [2024-11-05 03:24:15.174884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.862 [2024-11-05 03:24:15.174944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.862 [2024-11-05 03:24:15.174976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.862 [2024-11-05 03:24:15.175090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.862 [2024-11-05 03:24:15.175150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.862 [2024-11-05 03:24:15.175185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.862 [2024-11-05 03:24:15.175381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:51.862 03:24:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.862 03:24:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:51.862 03:24:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.862 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:52.121 03:24:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:04.351 03:24:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.351 03:24:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.351 03:24:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:04.351 [2024-11-05 03:24:27.751663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:04.351 [2024-11-05 03:24:27.754353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.351 [2024-11-05 03:24:27.754570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.351 [2024-11-05 03:24:27.754696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.351 [2024-11-05 03:24:27.754778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.351 [2024-11-05 03:24:27.754987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.351 [2024-11-05 03:24:27.755097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.351 [2024-11-05 03:24:27.755141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.351 [2024-11-05 03:24:27.755160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.351 [2024-11-05 03:24:27.755173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.351 [2024-11-05 03:24:27.755188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.351 [2024-11-05 03:24:27.755200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.351 [2024-11-05 03:24:27.755214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:04.351 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:04.352 03:24:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.352 03:24:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.352 03:24:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.352 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:04.352 03:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:04.611 [2024-11-05 03:24:28.151001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:04.611 [2024-11-05 03:24:28.153565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.611 [2024-11-05 03:24:28.153604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.611 [2024-11-05 03:24:28.153623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.611 [2024-11-05 03:24:28.153645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.611 [2024-11-05 03:24:28.153659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.611 [2024-11-05 03:24:28.153672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.611 [2024-11-05 03:24:28.153687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.611 [2024-11-05 03:24:28.153697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.611 [2024-11-05 03:24:28.153711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.611 [2024-11-05 03:24:28.153724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.611 [2024-11-05 03:24:28.153740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.611 [2024-11-05 03:24:28.153752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.869 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:04.870 03:24:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.870 03:24:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.870 03:24:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:04.870 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
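Once both controllers are detached, the loop re-attaches them, waits, and checks that exactly the original pair of BDFs reappears; a sketch of that verification step as traced at sw_hotplug.sh@66-71 (the heavily backslash-escaped comparison in the log is xtrace's rendering of this test):

    # Verify both NVMe controllers came back after re-attach.
    expected='0000:00:10.0 0000:00:11.0'
    sleep 12                            # grace period traced at sw_hotplug.sh@66
    bdfs=($(bdev_bdfs))                 # re-query the running target
    [[ "${bdfs[*]}" == "$expected" ]]   # both BDFs must be present again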
00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.128 03:24:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.14 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.14 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:14:17.338 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:17.338 03:24:40 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68750 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68750 ']' 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68750 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68750 00:14:17.338 killing process with pid 68750 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68750' 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68750 00:14:17.338 03:24:40 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68750 00:14:19.876 03:24:43 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:20.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:20.707 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:20.707 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:20.967 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.967 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.967 00:14:20.967 real 2m33.693s 00:14:20.967 user 1m51.072s 00:14:20.967 sys 0m22.827s 00:14:20.967 ************************************ 
00:14:20.967 END TEST sw_hotplug 00:14:20.967 ************************************ 00:14:20.967 03:24:44 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.967 03:24:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:21.227 03:24:44 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:21.227 03:24:44 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:21.227 03:24:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:21.227 03:24:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.227 03:24:44 -- common/autotest_common.sh@10 -- # set +x 00:14:21.227 ************************************ 00:14:21.227 START TEST nvme_xnvme 00:14:21.227 ************************************ 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:21.227 * Looking for test storage... 00:14:21.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.227 --rc genhtml_branch_coverage=1 00:14:21.227 --rc genhtml_function_coverage=1 00:14:21.227 --rc genhtml_legend=1 00:14:21.227 --rc geninfo_all_blocks=1 00:14:21.227 --rc geninfo_unexecuted_blocks=1 00:14:21.227 00:14:21.227 ' 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.227 --rc genhtml_branch_coverage=1 00:14:21.227 --rc genhtml_function_coverage=1 00:14:21.227 --rc genhtml_legend=1 00:14:21.227 --rc geninfo_all_blocks=1 00:14:21.227 --rc geninfo_unexecuted_blocks=1 00:14:21.227 00:14:21.227 ' 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.227 --rc genhtml_branch_coverage=1 00:14:21.227 --rc genhtml_function_coverage=1 00:14:21.227 --rc genhtml_legend=1 00:14:21.227 --rc geninfo_all_blocks=1 00:14:21.227 --rc geninfo_unexecuted_blocks=1 00:14:21.227 00:14:21.227 ' 00:14:21.227 03:24:44 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.227 --rc genhtml_branch_coverage=1 00:14:21.227 --rc genhtml_function_coverage=1 00:14:21.227 --rc genhtml_legend=1 00:14:21.227 --rc geninfo_all_blocks=1 00:14:21.227 --rc geninfo_unexecuted_blocks=1 00:14:21.227 00:14:21.227 ' 00:14:21.227 03:24:44 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.227 03:24:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.488 03:24:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.488 03:24:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.488 03:24:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.488 03:24:44 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.488 03:24:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.488 03:24:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.488 03:24:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:21.488 03:24:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.488 03:24:44 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:21.488 03:24:44 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:21.488 03:24:44 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.488 03:24:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 ************************************ 00:14:21.488 START TEST xnvme_to_malloc_dd_copy 00:14:21.488 ************************************ 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:21.488 03:24:44 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:21.488 03:24:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 { 00:14:21.488 "subsystems": [ 00:14:21.488 { 00:14:21.488 "subsystem": "bdev", 00:14:21.488 "config": [ 00:14:21.488 { 00:14:21.488 "params": { 00:14:21.488 "block_size": 512, 00:14:21.488 "num_blocks": 2097152, 00:14:21.488 "name": "malloc0" 00:14:21.488 }, 00:14:21.488 "method": "bdev_malloc_create" 00:14:21.488 }, 00:14:21.488 { 00:14:21.488 "params": { 00:14:21.488 "io_mechanism": "libaio", 00:14:21.488 "filename": "/dev/nullb0", 00:14:21.488 "name": "null0" 00:14:21.488 }, 00:14:21.488 "method": "bdev_xnvme_create" 00:14:21.488 }, 00:14:21.488 { 00:14:21.488 "method": "bdev_wait_for_examine" 00:14:21.488 } 00:14:21.488 ] 00:14:21.488 } 00:14:21.488 ] 00:14:21.488 } 00:14:21.488 [2024-11-05 03:24:44.949890] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
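The JSON emitted by gen_conf above describes the whole pipeline: a 1 GiB malloc bdev (2097152 blocks of 512 B) as input and an xnvme bdev over /dev/nullb0 using libaio as output. spdk_dd receives it over /dev/fd/62 via process substitution; an equivalent standalone invocation, with paths taken from this log, would be:

    # Hedged equivalent of the traced spdk_dd run (xnvme.sh@42).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)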
00:14:21.488 [2024-11-05 03:24:44.950005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70126 ] 00:14:21.748 [2024-11-05 03:24:45.131048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.748 [2024-11-05 03:24:45.245027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.288  [2024-11-05T03:24:48.812Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-05T03:24:49.784Z] Copying: 507/1024 [MB] (246 MBps) [2024-11-05T03:24:50.720Z] Copying: 754/1024 [MB] (246 MBps) [2024-11-05T03:24:50.980Z] Copying: 999/1024 [MB] (245 MBps) [2024-11-05T03:24:55.177Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:14:31.593 00:14:31.593 03:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:31.593 03:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:31.593 03:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:31.593 03:24:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:31.593 { 00:14:31.593 "subsystems": [ 00:14:31.593 { 00:14:31.593 "subsystem": "bdev", 00:14:31.593 "config": [ 00:14:31.593 { 00:14:31.593 "params": { 00:14:31.593 "block_size": 512, 00:14:31.593 "num_blocks": 2097152, 00:14:31.593 "name": "malloc0" 00:14:31.593 }, 00:14:31.593 "method": "bdev_malloc_create" 00:14:31.593 }, 00:14:31.593 { 00:14:31.593 "params": { 00:14:31.593 "io_mechanism": "libaio", 00:14:31.593 "filename": "/dev/nullb0", 00:14:31.593 "name": "null0" 00:14:31.593 }, 00:14:31.593 "method": "bdev_xnvme_create" 00:14:31.593 }, 00:14:31.593 { 00:14:31.593 "method": "bdev_wait_for_examine" 00:14:31.593 } 00:14:31.593 ] 00:14:31.593 } 00:14:31.593 ] 00:14:31.593 } 00:14:31.593 [2024-11-05 03:24:54.732054] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
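Each io_mechanism is exercised in both directions: the pass above wrote malloc0 into null0 at an average of 250 MBps, and the run starting here swaps --ib/--ob to drive the read path back through the same config:

    # The two traced directions per backend (xnvme.sh@42 and @47):
    "$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)  # write path
    "$SPDK/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json <(gen_conf)  # read path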
00:14:31.593 [2024-11-05 03:24:54.732182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:14:31.593 [2024-11-05 03:24:54.914096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.593 [2024-11-05 03:24:55.024046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.135  [2024-11-05T03:24:58.657Z] Copying: 262/1024 [MB] (262 MBps) [2024-11-05T03:24:59.595Z] Copying: 510/1024 [MB] (247 MBps) [2024-11-05T03:25:00.532Z] Copying: 758/1024 [MB] (248 MBps) [2024-11-05T03:25:00.532Z] Copying: 1006/1024 [MB] (247 MBps) [2024-11-05T03:25:04.730Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:14:41.146 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:41.146 03:25:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:41.146 { 00:14:41.146 "subsystems": [ 00:14:41.146 { 00:14:41.146 "subsystem": "bdev", 00:14:41.146 "config": [ 00:14:41.146 { 00:14:41.146 "params": { 00:14:41.146 "block_size": 512, 00:14:41.146 "num_blocks": 2097152, 00:14:41.146 "name": "malloc0" 00:14:41.146 }, 00:14:41.146 "method": "bdev_malloc_create" 00:14:41.146 }, 00:14:41.146 { 00:14:41.146 "params": { 00:14:41.146 "io_mechanism": "io_uring", 00:14:41.146 "filename": "/dev/nullb0", 00:14:41.146 "name": "null0" 00:14:41.146 }, 00:14:41.146 "method": "bdev_xnvme_create" 00:14:41.146 }, 00:14:41.146 { 00:14:41.146 "method": "bdev_wait_for_examine" 00:14:41.146 } 00:14:41.146 ] 00:14:41.146 } 00:14:41.146 ] 00:14:41.146 } 00:14:41.146 [2024-11-05 03:25:04.489636] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
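The copy loop then repeats with io_uring; as traced at xnvme.sh@38-39, only one key of the xnvme bdev config changes between passes, which is why the JSON below is otherwise identical to the libaio one:

    # Second iteration of the xnvme_io loop: same bdevs, new backend.
    method_bdev_xnvme_create_0["io_mechanism"]=io_uring   # was libaio above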
00:14:41.146 [2024-11-05 03:25:04.489755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:14:41.146 [2024-11-05 03:25:04.670756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.405 [2024-11-05 03:25:04.782494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.945  [2024-11-05T03:25:08.468Z] Copying: 272/1024 [MB] (272 MBps) [2024-11-05T03:25:09.407Z] Copying: 549/1024 [MB] (276 MBps) [2024-11-05T03:25:10.000Z] Copying: 824/1024 [MB] (274 MBps) [2024-11-05T03:25:14.199Z] Copying: 1024/1024 [MB] (average 275 MBps) 00:14:50.615 00:14:50.615 03:25:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:50.615 03:25:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:50.615 03:25:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:50.615 03:25:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:50.615 { 00:14:50.615 "subsystems": [ 00:14:50.615 { 00:14:50.615 "subsystem": "bdev", 00:14:50.615 "config": [ 00:14:50.615 { 00:14:50.615 "params": { 00:14:50.615 "block_size": 512, 00:14:50.615 "num_blocks": 2097152, 00:14:50.615 "name": "malloc0" 00:14:50.615 }, 00:14:50.615 "method": "bdev_malloc_create" 00:14:50.615 }, 00:14:50.615 { 00:14:50.615 "params": { 00:14:50.615 "io_mechanism": "io_uring", 00:14:50.615 "filename": "/dev/nullb0", 00:14:50.615 "name": "null0" 00:14:50.615 }, 00:14:50.615 "method": "bdev_xnvme_create" 00:14:50.615 }, 00:14:50.615 { 00:14:50.615 "method": "bdev_wait_for_examine" 00:14:50.615 } 00:14:50.615 ] 00:14:50.615 } 00:14:50.615 ] 00:14:50.615 } 00:14:50.615 [2024-11-05 03:25:13.843132] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
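All four passes target the same /dev/nullb0, a 1 GiB null block device that init_null_blk loads at the start of the test and remove_null_blk tears down at the end (dd/common.sh@186 and @191 in the trace):

    # null_blk lifecycle as traced; gb=1 sizes the device at 1 GiB.
    modprobe null_blk gb=1    # creates /dev/nullb0
    # ... run the copy passes ...
    modprobe -r null_blk      # unloaded once the test finishes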
00:14:50.615 [2024-11-05 03:25:13.843250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70456 ] 00:14:50.615 [2024-11-05 03:25:14.021815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.615 [2024-11-05 03:25:14.136906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.150  [2024-11-05T03:25:17.671Z] Copying: 277/1024 [MB] (277 MBps) [2024-11-05T03:25:18.608Z] Copying: 556/1024 [MB] (279 MBps) [2024-11-05T03:25:19.551Z] Copying: 837/1024 [MB] (281 MBps) [2024-11-05T03:25:23.743Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:15:00.159 00:15:00.159 03:25:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:00.159 03:25:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:00.159 00:15:00.159 real 0m38.314s 00:15:00.159 user 0m33.443s 00:15:00.159 sys 0m4.389s 00:15:00.159 03:25:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:00.159 03:25:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:00.159 ************************************ 00:15:00.159 END TEST xnvme_to_malloc_dd_copy 00:15:00.159 ************************************ 00:15:00.159 03:25:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:00.159 03:25:23 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:00.159 03:25:23 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:00.159 03:25:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.159 ************************************ 00:15:00.159 START TEST xnvme_bdevperf 00:15:00.159 ************************************ 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:00.159 
03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:00.159 03:25:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:00.159 { 00:15:00.159 "subsystems": [ 00:15:00.159 { 00:15:00.159 "subsystem": "bdev", 00:15:00.159 "config": [ 00:15:00.159 { 00:15:00.159 "params": { 00:15:00.159 "io_mechanism": "libaio", 00:15:00.159 "filename": "/dev/nullb0", 00:15:00.159 "name": "null0" 00:15:00.159 }, 00:15:00.159 "method": "bdev_xnvme_create" 00:15:00.159 }, 00:15:00.159 { 00:15:00.159 "method": "bdev_wait_for_examine" 00:15:00.159 } 00:15:00.159 ] 00:15:00.159 } 00:15:00.159 ] 00:15:00.159 } 00:15:00.159 [2024-11-05 03:25:23.327273] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:15:00.159 [2024-11-05 03:25:23.327442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:15:00.159 [2024-11-05 03:25:23.510564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.159 [2024-11-05 03:25:23.624741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.418 Running I/O for 5 seconds... 00:15:02.744 154176.00 IOPS, 602.25 MiB/s [2024-11-05T03:25:27.265Z] 153632.00 IOPS, 600.12 MiB/s [2024-11-05T03:25:28.201Z] 153450.67 IOPS, 599.42 MiB/s [2024-11-05T03:25:29.139Z] 153584.00 IOPS, 599.94 MiB/s 00:15:05.555 Latency(us) 00:15:05.555 [2024-11-05T03:25:29.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.555 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:05.555 null0 : 5.00 153678.83 600.31 0.00 0.00 414.08 109.39 1776.58 00:15:05.555 [2024-11-05T03:25:29.139Z] =================================================================================================================== 00:15:05.555 [2024-11-05T03:25:29.139Z] Total : 153678.83 600.31 0.00 0.00 414.08 109.39 1776.58 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:06.938 03:25:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:06.938 { 00:15:06.938 "subsystems": [ 00:15:06.938 { 00:15:06.938 "subsystem": "bdev", 00:15:06.938 "config": [ 00:15:06.938 { 00:15:06.938 "params": { 00:15:06.938 "io_mechanism": "io_uring", 00:15:06.938 "filename": "/dev/nullb0", 00:15:06.938 "name": "null0" 00:15:06.938 }, 00:15:06.938 "method": "bdev_xnvme_create" 00:15:06.938 }, 00:15:06.938 { 00:15:06.938 "method": 
"bdev_wait_for_examine" 00:15:06.938 } 00:15:06.938 ] 00:15:06.938 } 00:15:06.938 ] 00:15:06.938 } 00:15:06.938 [2024-11-05 03:25:30.183944] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:15:06.938 [2024-11-05 03:25:30.184059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70665 ] 00:15:06.938 [2024-11-05 03:25:30.362327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.938 [2024-11-05 03:25:30.474358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.507 Running I/O for 5 seconds... 00:15:09.381 202880.00 IOPS, 792.50 MiB/s [2024-11-05T03:25:33.903Z] 202048.00 IOPS, 789.25 MiB/s [2024-11-05T03:25:34.839Z] 201664.00 IOPS, 787.75 MiB/s [2024-11-05T03:25:35.840Z] 201984.00 IOPS, 789.00 MiB/s [2024-11-05T03:25:35.840Z] 202086.40 IOPS, 789.40 MiB/s 00:15:12.256 Latency(us) 00:15:12.256 [2024-11-05T03:25:35.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.256 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:12.256 null0 : 5.00 202015.16 789.12 0.00 0.00 314.48 284.58 1671.30 00:15:12.256 [2024-11-05T03:25:35.840Z] =================================================================================================================== 00:15:12.256 [2024-11-05T03:25:35.840Z] Total : 202015.16 789.12 0.00 0.00 314.48 284.58 1671.30 00:15:13.635 03:25:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:13.635 03:25:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:13.635 00:15:13.635 real 0m13.734s 00:15:13.635 user 0m10.173s 00:15:13.635 sys 0m3.363s 00:15:13.635 03:25:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.635 03:25:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:13.635 ************************************ 00:15:13.635 END TEST xnvme_bdevperf 00:15:13.635 ************************************ 00:15:13.635 ************************************ 00:15:13.635 END TEST nvme_xnvme 00:15:13.635 ************************************ 00:15:13.635 00:15:13.635 real 0m52.433s 00:15:13.635 user 0m43.800s 00:15:13.635 sys 0m7.961s 00:15:13.635 03:25:37 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.635 03:25:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.635 03:25:37 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:13.635 03:25:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:13.635 03:25:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.635 03:25:37 -- common/autotest_common.sh@10 -- # set +x 00:15:13.635 ************************************ 00:15:13.635 START TEST blockdev_xnvme 00:15:13.635 ************************************ 00:15:13.635 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:13.635 * Looking for test storage... 
00:15:13.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:13.635 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:13.635 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:15:13.635 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:13.894 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.894 03:25:37 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.895 03:25:37 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:13.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.895 --rc genhtml_branch_coverage=1 00:15:13.895 --rc genhtml_function_coverage=1 00:15:13.895 --rc genhtml_legend=1 00:15:13.895 --rc geninfo_all_blocks=1 00:15:13.895 --rc geninfo_unexecuted_blocks=1 00:15:13.895 00:15:13.895 ' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:13.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.895 --rc genhtml_branch_coverage=1 00:15:13.895 --rc genhtml_function_coverage=1 00:15:13.895 --rc genhtml_legend=1 
00:15:13.895 --rc geninfo_all_blocks=1 00:15:13.895 --rc geninfo_unexecuted_blocks=1 00:15:13.895 00:15:13.895 ' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:13.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.895 --rc genhtml_branch_coverage=1 00:15:13.895 --rc genhtml_function_coverage=1 00:15:13.895 --rc genhtml_legend=1 00:15:13.895 --rc geninfo_all_blocks=1 00:15:13.895 --rc geninfo_unexecuted_blocks=1 00:15:13.895 00:15:13.895 ' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:13.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.895 --rc genhtml_branch_coverage=1 00:15:13.895 --rc genhtml_function_coverage=1 00:15:13.895 --rc genhtml_legend=1 00:15:13.895 --rc geninfo_all_blocks=1 00:15:13.895 --rc geninfo_unexecuted_blocks=1 00:15:13.895 00:15:13.895 ' 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70818 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:13.895 03:25:37 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70818 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 70818 ']' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.895 03:25:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.895 [2024-11-05 03:25:37.444993] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:15:13.895 [2024-11-05 03:25:37.445119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:15:14.155 [2024-11-05 03:25:37.623839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.155 [2024-11-05 03:25:37.735183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.091 03:25:38 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:15.091 03:25:38 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:15:15.091 03:25:38 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:15.091 03:25:38 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:15.091 03:25:38 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:15.091 03:25:38 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:15.091 03:25:38 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:15.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.917 Waiting for block devices as requested 00:15:15.917 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.180 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.180 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.180 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:21.457 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:21.457 
03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:21.457 03:25:44 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.457 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:21.458 nvme0n1 00:15:21.458 nvme1n1 00:15:21.458 nvme2n1 00:15:21.458 nvme2n2 00:15:21.458 nvme2n3 00:15:21.458 nvme3n1 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.458 03:25:44 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:21.458 03:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 03:25:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.458 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:21.458 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "705c81a3-aa6d-4912-a31c-b4343361ace6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "705c81a3-aa6d-4912-a31c-b4343361ace6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2abadaf4-edbb-4876-95b7-739340a9b1e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2abadaf4-edbb-4876-95b7-739340a9b1e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2d59f524-8722-4e0b-8c79-ebe9ed5c90c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d59f524-8722-4e0b-8c79-ebe9ed5c90c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "435139e5-6736-41f5-8647-fa466df0f040"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "435139e5-6736-41f5-8647-fa466df0f040",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f4fda5b3-65c3-43c7-97d2-b4ae9ddfca5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4fda5b3-65c3-43c7-97d2-b4ae9ddfca5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a7ed5202-ad20-4d32-a197-843d5215b6e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7ed5202-ad20-4d32-a197-843d5215b6e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:21.458 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:21.718 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:21.718 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:21.718 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:21.718 03:25:45 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70818 00:15:21.718 03:25:45 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 70818 ']' 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70818 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70818 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.718 killing process with pid 70818 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70818' 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70818 00:15:21.718 03:25:45 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70818 00:15:24.282 03:25:47 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:24.282 03:25:47 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:24.282 03:25:47 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:24.282 03:25:47 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:24.282 03:25:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 ************************************ 00:15:24.282 START TEST bdev_hello_world 00:15:24.282 ************************************ 00:15:24.282 03:25:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:24.282 [2024-11-05 03:25:47.570120] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:15:24.282 [2024-11-05 03:25:47.570262] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:15:24.282 [2024-11-05 03:25:47.752346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.541 [2024-11-05 03:25:47.868587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.801 [2024-11-05 03:25:48.305014] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:24.801 [2024-11-05 03:25:48.305065] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:24.801 [2024-11-05 03:25:48.305094] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:24.801 [2024-11-05 03:25:48.307272] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:24.801 [2024-11-05 03:25:48.307753] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:24.801 [2024-11-05 03:25:48.307788] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:24.801 [2024-11-05 03:25:48.308078] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
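The xNVMe setup traced above boils down to a handful of RPCs. A minimal, hand-runnable sketch, assuming a live SPDK target on the default /var/tmp/spdk.sock; the paths, the io_uring mechanism, and the jq filter are exactly the ones this run used, while the standalone loop is a simplification of the blockdev.sh@94-96 batching:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # one xNVMe bdev per kernel NVMe namespace, io_uring backend
    # (same three positional arguments the trace above appended per device)
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        "$rpc" bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
    done
    "$rpc" bdev_wait_for_examine
    # list the unclaimed bdevs, same filter as blockdev.sh@747/@748
    "$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'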
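The hello-world pass that produced the notices above is just the example binary pointed at the generated bdev config. A sketch with the same arguments the harness used, nvme0n1 being the first xNVMe bdev created earlier:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
    # expected, per the trace: 'Writing to the bdev', then
    # 'Read string from bdev : Hello World!'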
00:15:24.801
00:15:24.801 [2024-11-05 03:25:48.308119] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:15:26.180
00:15:26.180 real 0m1.942s
00:15:26.180 user 0m1.578s
00:15:26.180 sys 0m0.248s
00:15:26.180 03:25:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:26.180 03:25:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:15:26.180 ************************************
00:15:26.180 END TEST bdev_hello_world
00:15:26.180 ************************************
00:15:26.180 03:25:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:15:26.180 03:25:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:15:26.180 03:25:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:26.180 03:25:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:26.180 ************************************
00:15:26.180 START TEST bdev_bounds
00:15:26.180 ************************************
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds ''
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71236
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:15:26.180 Process bdevio pid: 71236
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71236'
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71236
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71236 ']'
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:26.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:26.180 03:25:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:26.180 [2024-11-05 03:25:49.584457] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:15:26.180 [2024-11-05 03:25:49.584585] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71236 ]
00:15:26.439 [2024-11-05 03:25:49.765667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:26.439 [2024-11-05 03:25:49.887936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:26.439 [2024-11-05 03:25:49.887947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:26.439 [2024-11-05 03:25:49.887948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:27.007 03:25:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:27.007 03:25:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0
00:15:27.007 03:25:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:15:27.007 I/O targets:
00:15:27.007 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:15:27.007 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:15:27.007 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:27.007 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:27.007 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:27.007 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:15:27.007
00:15:27.007
00:15:27.007 CUnit - A unit testing framework for C - Version 2.1-3
00:15:27.007 http://cunit.sourceforge.net/
00:15:27.007
00:15:27.007
00:15:27.007 Suite: bdevio tests on: nvme3n1
00:15:27.007 Test: blockdev write read block ...passed
00:15:27.007 Test: blockdev write zeroes read block ...passed
00:15:27.007 Test: blockdev write zeroes read no split ...passed
00:15:27.007 Test: blockdev write zeroes read split ...passed
00:15:27.007 Test: blockdev write zeroes read split partial ...passed
00:15:27.007 Test: blockdev reset ...passed
00:15:27.007 Test: blockdev write read 8 blocks ...passed
00:15:27.007 Test: blockdev write read size > 128k ...passed
00:15:27.007 Test: blockdev write read invalid size ...passed
00:15:27.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.007 Test: blockdev write read max offset ...passed
00:15:27.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.007 Test: blockdev writev readv 8 blocks ...passed
00:15:27.007 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.007 Test: blockdev writev readv block ...passed
00:15:27.007 Test: blockdev writev readv size > 128k ...passed
00:15:27.007 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.007 Test: blockdev comparev and writev ...passed
00:15:27.007 Test: blockdev nvme passthru rw ...passed
00:15:27.007 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.007 Test: blockdev nvme admin passthru ...passed
00:15:27.007 Test: blockdev copy ...passed
00:15:27.008 Suite: bdevio tests on: nvme2n3
00:15:27.008 Test: blockdev write read block ...passed
00:15:27.008 Test: blockdev write zeroes read block ...passed
00:15:27.008 Test: blockdev write zeroes read no split ...passed
00:15:27.267 Test: blockdev write zeroes read split ...passed
00:15:27.267 Test: blockdev write zeroes read split partial ...passed
00:15:27.267 Test: blockdev reset ...passed
00:15:27.267 Test: blockdev write read 8 blocks ...passed
00:15:27.267 Test: blockdev write read size > 128k ...passed
00:15:27.267 Test: blockdev write read invalid size ...passed
00:15:27.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.267 Test: blockdev write read max offset ...passed
00:15:27.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.267 Test: blockdev writev readv 8 blocks ...passed
00:15:27.267 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.267 Test: blockdev writev readv block ...passed
00:15:27.267 Test: blockdev writev readv size > 128k ...passed
00:15:27.267 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.267 Test: blockdev comparev and writev ...passed
00:15:27.267 Test: blockdev nvme passthru rw ...passed
00:15:27.267 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.267 Test: blockdev nvme admin passthru ...passed
00:15:27.267 Test: blockdev copy ...passed
00:15:27.267 Suite: bdevio tests on: nvme2n2
00:15:27.267 Test: blockdev write read block ...passed
00:15:27.267 Test: blockdev write zeroes read block ...passed
00:15:27.267 Test: blockdev write zeroes read no split ...passed
00:15:27.267 Test: blockdev write zeroes read split ...passed
00:15:27.267 Test: blockdev write zeroes read split partial ...passed
00:15:27.267 Test: blockdev reset ...passed
00:15:27.267 Test: blockdev write read 8 blocks ...passed
00:15:27.267 Test: blockdev write read size > 128k ...passed
00:15:27.267 Test: blockdev write read invalid size ...passed
00:15:27.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.267 Test: blockdev write read max offset ...passed
00:15:27.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.267 Test: blockdev writev readv 8 blocks ...passed
00:15:27.267 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.267 Test: blockdev writev readv block ...passed
00:15:27.267 Test: blockdev writev readv size > 128k ...passed
00:15:27.267 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.267 Test: blockdev comparev and writev ...passed
00:15:27.267 Test: blockdev nvme passthru rw ...passed
00:15:27.267 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.267 Test: blockdev nvme admin passthru ...passed
00:15:27.267 Test: blockdev copy ...passed
00:15:27.267 Suite: bdevio tests on: nvme2n1
00:15:27.267 Test: blockdev write read block ...passed
00:15:27.267 Test: blockdev write zeroes read block ...passed
00:15:27.267 Test: blockdev write zeroes read no split ...passed
00:15:27.267 Test: blockdev write zeroes read split ...passed
00:15:27.267 Test: blockdev write zeroes read split partial ...passed
00:15:27.267 Test: blockdev reset ...passed
00:15:27.267 Test: blockdev write read 8 blocks ...passed
00:15:27.267 Test: blockdev write read size > 128k ...passed
00:15:27.267 Test: blockdev write read invalid size ...passed
00:15:27.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.267 Test: blockdev write read max offset ...passed
00:15:27.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.267 Test: blockdev writev readv 8 blocks ...passed
00:15:27.267 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.267 Test: blockdev writev readv block ...passed
00:15:27.267 Test: blockdev writev readv size > 128k ...passed
00:15:27.267 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.267 Test: blockdev comparev and writev ...passed
00:15:27.267 Test: blockdev nvme passthru rw ...passed
00:15:27.267 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.267 Test: blockdev nvme admin passthru ...passed
00:15:27.267 Test: blockdev copy ...passed
00:15:27.267 Suite: bdevio tests on: nvme1n1
00:15:27.267 Test: blockdev write read block ...passed
00:15:27.267 Test: blockdev write zeroes read block ...passed
00:15:27.267 Test: blockdev write zeroes read no split ...passed
00:15:27.527 Test: blockdev write zeroes read split ...passed
00:15:27.527 Test: blockdev write zeroes read split partial ...passed
00:15:27.527 Test: blockdev reset ...passed
00:15:27.527 Test: blockdev write read 8 blocks ...passed
00:15:27.527 Test: blockdev write read size > 128k ...passed
00:15:27.527 Test: blockdev write read invalid size ...passed
00:15:27.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.527 Test: blockdev write read max offset ...passed
00:15:27.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.527 Test: blockdev writev readv 8 blocks ...passed
00:15:27.527 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.527 Test: blockdev writev readv block ...passed
00:15:27.527 Test: blockdev writev readv size > 128k ...passed
00:15:27.527 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.527 Test: blockdev comparev and writev ...passed
00:15:27.527 Test: blockdev nvme passthru rw ...passed
00:15:27.527 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.527 Test: blockdev nvme admin passthru ...passed
00:15:27.527 Test: blockdev copy ...passed
00:15:27.527 Suite: bdevio tests on: nvme0n1
00:15:27.527 Test: blockdev write read block ...passed
00:15:27.527 Test: blockdev write zeroes read block ...passed
00:15:27.527 Test: blockdev write zeroes read no split ...passed
00:15:27.527 Test: blockdev write zeroes read split ...passed
00:15:27.527 Test: blockdev write zeroes read split partial ...passed
00:15:27.527 Test: blockdev reset ...passed
00:15:27.527 Test: blockdev write read 8 blocks ...passed
00:15:27.527 Test: blockdev write read size > 128k ...passed
00:15:27.527 Test: blockdev write read invalid size ...passed
00:15:27.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:27.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:27.527 Test: blockdev write read max offset ...passed
00:15:27.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:27.527 Test: blockdev writev readv 8 blocks ...passed
00:15:27.527 Test: blockdev writev readv 30 x 1block ...passed
00:15:27.527 Test: blockdev writev readv block ...passed
00:15:27.527 Test: blockdev writev readv size > 128k ...passed
00:15:27.527 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:27.527 Test: blockdev comparev and writev ...passed
00:15:27.527 Test: blockdev nvme passthru rw ...passed
00:15:27.527 Test: blockdev nvme passthru vendor specific ...passed
00:15:27.527 Test: blockdev nvme admin passthru ...passed
00:15:27.527 Test: blockdev copy ...passed
00:15:27.527
00:15:27.527 Run Summary: Type Total Ran Passed Failed Inactive
00:15:27.527 suites 6 6 n/a 0 0
00:15:27.527 tests 138 138 138 0 0
00:15:27.527 asserts 780 780 780 0 n/a
00:15:27.527
00:15:27.527 Elapsed time = 1.345 seconds
00:15:27.527 0
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71236
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71236 ']'
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71236
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71236
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71236'
killing process with pid 71236
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71236
00:15:27.527 03:25:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71236
00:15:28.907 03:25:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:28.907
00:15:28.907 real 0m2.685s
00:15:28.907 user 0m6.568s
00:15:28.907 sys 0m0.414s
00:15:28.907 03:25:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:28.907 03:25:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:28.907 ************************************
00:15:28.907 END TEST bdev_bounds
00:15:28.907 ************************************
00:15:28.908 03:25:52 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:15:28.908 03:25:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:15:28.908 03:25:52 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:28.908 03:25:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:28.908 ************************************
00:15:28.908 START TEST bdev_nbd
00:15:28.908 ************************************
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
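The Run Summary above (6 suites, 138 tests, 780 asserts, no failures) comes from the bdevio server plus its RPC driver. A rough sketch of the same pair run outside the harness, assuming the same tree layout; the sleep is a crude stand-in for the waitforlisten polling the script does on /var/tmp/spdk.sock:

    spdk=/home/vagrant/spdk_repo/spdk
    # -w and -s 0 exactly as blockdev.sh@288 passes them above
    "$spdk"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk"/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # stand-in for waitforlisten; wait until the RPC socket is up
    "$spdk"/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid"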
00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71296 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71296 /var/tmp/spdk-nbd.sock 00:15:28.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71296 ']' 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.908 03:25:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-05 03:25:52.348500] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:15:28.908 [2024-11-05 03:25:52.348866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.168 [2024-11-05 03:25:52.531229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.168 [2024-11-05 03:25:52.641429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:29.736 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.737 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.996 
1+0 records in 00:15:29.996 1+0 records out 00:15:29.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581505 s, 7.0 MB/s 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.996 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.255 1+0 records in 00:15:30.255 1+0 records out 00:15:30.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861817 s, 4.8 MB/s 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:30.255 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:30.515 03:25:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.515 1+0 records in 00:15:30.515 1+0 records out 00:15:30.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780115 s, 5.3 MB/s 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:30.515 03:25:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:30.774 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.774 1+0 records in 00:15:30.774 1+0 records out 00:15:30.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072263 s, 5.7 MB/s 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:30.775 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.033 1+0 records in 00:15:31.033 1+0 records out 00:15:31.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710633 s, 5.8 MB/s 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:31.033 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:31.034 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:15:31.293 03:25:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.293 1+0 records in 00:15:31.293 1+0 records out 00:15:31.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932285 s, 4.4 MB/s 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:31.293 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd0", 00:15:31.552 "bdev_name": "nvme0n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd1", 00:15:31.552 "bdev_name": "nvme1n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd2", 00:15:31.552 "bdev_name": "nvme2n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd3", 00:15:31.552 "bdev_name": "nvme2n2" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd4", 00:15:31.552 "bdev_name": "nvme2n3" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd5", 00:15:31.552 "bdev_name": "nvme3n1" 00:15:31.552 } 00:15:31.552 ]' 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd0", 00:15:31.552 "bdev_name": "nvme0n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd1", 00:15:31.552 "bdev_name": "nvme1n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd2", 00:15:31.552 "bdev_name": "nvme2n1" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd3", 00:15:31.552 "bdev_name": "nvme2n2" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd4", 00:15:31.552 "bdev_name": "nvme2n3" 00:15:31.552 }, 00:15:31.552 { 00:15:31.552 "nbd_device": "/dev/nbd5", 00:15:31.552 "bdev_name": "nvme3n1" 00:15:31.552 } 00:15:31.552 ]' 00:15:31.552 03:25:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.552 03:25:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.811 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.071 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.330 03:25:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.589 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.848 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.107 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:33.366 /dev/nbd0 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.366 1+0 records in 00:15:33.366 1+0 records out 00:15:33.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729084 s, 5.6 MB/s 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.366 03:25:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:33.626 /dev/nbd1 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.626 1+0 records in 00:15:33.626 1+0 records out 00:15:33.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688196 s, 6.0 MB/s 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:33.626 03:25:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.626 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:33.885 /dev/nbd10 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.885 1+0 records in 00:15:33.885 1+0 records out 00:15:33.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716751 s, 5.7 MB/s 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.885 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:34.144 /dev/nbd11 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:34.144 03:25:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.144 1+0 records in 00:15:34.144 1+0 records out 00:15:34.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804726 s, 5.1 MB/s 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:34.144 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:34.404 /dev/nbd12 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.404 1+0 records in 00:15:34.404 1+0 records out 00:15:34.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758989 s, 5.4 MB/s 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:34.404 03:25:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:34.663 /dev/nbd13 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.663 1+0 records in 00:15:34.663 1+0 records out 00:15:34.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704072 s, 5.8 MB/s 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.663 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:34.922 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd0", 00:15:34.922 "bdev_name": "nvme0n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd1", 00:15:34.922 "bdev_name": "nvme1n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd10", 00:15:34.922 "bdev_name": "nvme2n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd11", 00:15:34.922 "bdev_name": "nvme2n2" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd12", 00:15:34.922 "bdev_name": "nvme2n3" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd13", 00:15:34.922 "bdev_name": "nvme3n1" 00:15:34.922 } 00:15:34.922 ]' 00:15:34.922 03:25:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd0", 00:15:34.922 "bdev_name": "nvme0n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd1", 00:15:34.922 "bdev_name": "nvme1n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd10", 00:15:34.922 "bdev_name": "nvme2n1" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd11", 00:15:34.922 "bdev_name": "nvme2n2" 00:15:34.922 }, 00:15:34.922 { 00:15:34.922 "nbd_device": "/dev/nbd12", 00:15:34.922 "bdev_name": "nvme2n3" 00:15:34.923 }, 00:15:34.923 { 00:15:34.923 "nbd_device": "/dev/nbd13", 00:15:34.923 "bdev_name": "nvme3n1" 00:15:34.923 } 00:15:34.923 ]' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:34.923 /dev/nbd1 00:15:34.923 /dev/nbd10 00:15:34.923 /dev/nbd11 00:15:34.923 /dev/nbd12 00:15:34.923 /dev/nbd13' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:34.923 /dev/nbd1 00:15:34.923 /dev/nbd10 00:15:34.923 /dev/nbd11 00:15:34.923 /dev/nbd12 00:15:34.923 /dev/nbd13' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:34.923 256+0 records in 00:15:34.923 256+0 records out 00:15:34.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129568 s, 80.9 MB/s 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.923 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:35.182 256+0 records in 00:15:35.182 256+0 records out 00:15:35.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130083 s, 8.1 MB/s 00:15:35.182 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.182 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:35.182 256+0 records in 00:15:35.182 256+0 records out 00:15:35.182 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.151799 s, 6.9 MB/s 00:15:35.182 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.182 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:35.441 256+0 records in 00:15:35.441 256+0 records out 00:15:35.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123129 s, 8.5 MB/s 00:15:35.441 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.441 03:25:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:35.441 256+0 records in 00:15:35.441 256+0 records out 00:15:35.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1266 s, 8.3 MB/s 00:15:35.700 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.700 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:35.700 256+0 records in 00:15:35.700 256+0 records out 00:15:35.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123744 s, 8.5 MB/s 00:15:35.701 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.701 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:35.960 256+0 records in 00:15:35.960 256+0 records out 00:15:35.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128522 s, 8.2 MB/s 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.960 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:36.227 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.495 03:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.496 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.755 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.014 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.273 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:37.532 03:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:37.791 malloc_lvol_verify 00:15:37.791 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:38.050 90146116-aa31-451a-8a4d-49b7f458f663 00:15:38.050 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:38.050 03bd1bca-21d6-4f0b-8535-ce89fab85ab3 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:38.309 /dev/nbd0 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:38.309 mke2fs 1.47.0 (5-Feb-2023) 00:15:38.309 
Discarding device blocks: 0/4096 done 00:15:38.309 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:38.309 00:15:38.309 Allocating group tables: 0/1 done 00:15:38.309 Writing inode tables: 0/1 done 00:15:38.309 Creating journal (1024 blocks): done 00:15:38.309 Writing superblocks and filesystem accounting information: 0/1 done 00:15:38.309 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.309 03:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71296 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71296 ']' 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71296 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71296 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:38.569 killing process with pid 71296 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71296' 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71296 00:15:38.569 03:26:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71296 00:15:39.971 03:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:39.971 00:15:39.971 real 0m11.142s 00:15:39.971 user 0m14.213s 00:15:39.971 sys 0m4.767s 00:15:39.971 03:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.971 03:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:39.971 ************************************ 00:15:39.971 END TEST bdev_nbd 00:15:39.971 
************************************ 00:15:39.971 03:26:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:39.971 03:26:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:39.971 03:26:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:39.971 03:26:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:39.971 03:26:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:39.971 03:26:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.971 03:26:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.971 ************************************ 00:15:39.971 START TEST bdev_fio 00:15:39.971 ************************************ 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:39.971 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:15:39.971 03:26:03 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.971 03:26:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:40.231 ************************************ 00:15:40.231 START TEST bdev_fio_rw_verify 00:15:40.231 ************************************ 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:40.231 03:26:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:40.231 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:40.231 fio-3.35 00:15:40.231 Starting 6 threads 00:15:52.435 00:15:52.435 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71708: Tue Nov 5 03:26:14 2024 00:15:52.435 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(1319MiB/10001msec) 00:15:52.435 slat (usec): min=2, max=1625, avg= 6.29, stdev= 5.09 00:15:52.435 clat (usec): min=111, max=6496, avg=570.23, 
stdev=192.46 00:15:52.435 lat (usec): min=120, max=6510, avg=576.52, stdev=193.20 00:15:52.435 clat percentiles (usec): 00:15:52.435 | 50.000th=[ 603], 99.000th=[ 1057], 99.900th=[ 1893], 99.990th=[ 4113], 00:15:52.435 | 99.999th=[ 6456] 00:15:52.435 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1327MiB/10001msec); 0 zone resets 00:15:52.435 slat (usec): min=10, max=2059, avg=20.55, stdev=23.54 00:15:52.435 clat (usec): min=75, max=6368, avg=639.35, stdev=201.58 00:15:52.435 lat (usec): min=90, max=6393, avg=659.90, stdev=204.48 00:15:52.435 clat percentiles (usec): 00:15:52.435 | 50.000th=[ 652], 99.000th=[ 1287], 99.900th=[ 2040], 99.990th=[ 2900], 00:15:52.435 | 99.999th=[ 6259] 00:15:52.435 bw ( KiB/s): min=112943, max=154680, per=99.66%, avg=135445.26, stdev=2040.08, samples=114 00:15:52.435 iops : min=28235, max=38670, avg=33861.16, stdev=510.02, samples=114 00:15:52.435 lat (usec) : 100=0.01%, 250=4.33%, 500=19.61%, 750=62.71%, 1000=10.90% 00:15:52.435 lat (msec) : 2=2.36%, 4=0.08%, 10=0.01% 00:15:52.435 cpu : usr=59.60%, sys=28.61%, ctx=7955, majf=0, minf=27884 00:15:52.435 IO depths : 1=12.1%, 2=24.6%, 4=50.5%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.435 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.435 issued rwts: total=337600,339794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:52.435 00:15:52.435 Run status group 0 (all jobs): 00:15:52.435 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1319MiB (1383MB), run=10001-10001msec 00:15:52.435 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1327MiB (1392MB), run=10001-10001msec 00:15:52.695 ----------------------------------------------------- 00:15:52.695 Suppressions used: 00:15:52.695 count bytes template 00:15:52.695 6 48 /usr/src/fio/parse.c 00:15:52.695 1949 187104 /usr/src/fio/iolog.c 00:15:52.695 1 8 libtcmalloc_minimal.so 00:15:52.695 1 904 libcrypto.so 00:15:52.695 ----------------------------------------------------- 00:15:52.695 00:15:52.695 00:15:52.695 real 0m12.609s 00:15:52.695 user 0m37.761s 00:15:52.695 sys 0m17.659s 00:15:52.695 ************************************ 00:15:52.695 END TEST bdev_fio_rw_verify 00:15:52.695 ************************************ 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:52.695 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "705c81a3-aa6d-4912-a31c-b4343361ace6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "705c81a3-aa6d-4912-a31c-b4343361ace6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2abadaf4-edbb-4876-95b7-739340a9b1e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2abadaf4-edbb-4876-95b7-739340a9b1e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2d59f524-8722-4e0b-8c79-ebe9ed5c90c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d59f524-8722-4e0b-8c79-ebe9ed5c90c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "435139e5-6736-41f5-8647-fa466df0f040"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "435139e5-6736-41f5-8647-fa466df0f040",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f4fda5b3-65c3-43c7-97d2-b4ae9ddfca5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4fda5b3-65c3-43c7-97d2-b4ae9ddfca5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a7ed5202-ad20-4d32-a197-843d5215b6e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7ed5202-ad20-4d32-a197-843d5215b6e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.955 /home/vagrant/spdk_repo/spdk 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:15:52.955 00:15:52.955 real 0m12.827s 00:15:52.955 user 0m37.886s 00:15:52.955 sys 0m17.758s 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.955 03:26:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:52.955 ************************************ 00:15:52.955 END TEST bdev_fio 00:15:52.955 ************************************ 00:15:52.955 03:26:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:52.955 03:26:16 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:52.955 03:26:16 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:52.955 03:26:16 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.955 03:26:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.955 ************************************ 00:15:52.955 START TEST bdev_verify 00:15:52.955 ************************************ 00:15:52.955 03:26:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:52.955 [2024-11-05 03:26:16.453925] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:15:52.955 [2024-11-05 03:26:16.454055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71883 ] 00:15:53.214 [2024-11-05 03:26:16.635342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.214 [2024-11-05 03:26:16.747928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.214 [2024-11-05 03:26:16.747961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.780 Running I/O for 5 seconds... 
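The verify pass that follows is driven by the single bdevperf invocation shown in the run_test line above. A minimal sketch of reproducing it by hand, using the same paths and flags as this job (the flag comments are interpretations of the invocation, not bdevperf's own documentation):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \  # xNVMe bdev definitions generated earlier in the suite
        -q 128 \                      # 128 outstanding I/Os per job
        -o 4096 \                     # 4 KiB I/O size
        -w verify \                   # write, then read back and compare
        -t 5 \                        # run for 5 seconds
        -C \                          # every core in the mask drives every bdev
        -m 0x3                        # two-core mask

With -C and -m 0x3, each bdev reports two Job lines in the latency table below, one per core (Core Mask 0x1 and 0x2).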
00:15:56.099 24064.00 IOPS, 94.00 MiB/s [2024-11-05T03:26:20.615Z] 24528.00 IOPS, 95.81 MiB/s [2024-11-05T03:26:21.551Z] 24266.67 IOPS, 94.79 MiB/s [2024-11-05T03:26:22.496Z] 24240.00 IOPS, 94.69 MiB/s [2024-11-05T03:26:22.496Z] 24083.20 IOPS, 94.08 MiB/s 00:15:58.912 Latency(us) 00:15:58.912 [2024-11-05T03:26:22.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.912 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0xa0000 00:15:58.912 nvme0n1 : 5.03 1856.90 7.25 0.00 0.00 68819.17 9738.28 66957.26 00:15:58.912 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0xa0000 length 0xa0000 00:15:58.912 nvme0n1 : 5.05 1773.93 6.93 0.00 0.00 72041.00 14317.91 62325.00 00:15:58.912 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0xbd0bd 00:15:58.912 nvme1n1 : 5.06 2884.17 11.27 0.00 0.00 44211.64 5869.29 53060.47 00:15:58.912 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:58.912 nvme1n1 : 5.04 2739.02 10.70 0.00 0.00 46535.87 5527.13 60219.42 00:15:58.912 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0x80000 00:15:58.912 nvme2n1 : 5.07 1868.05 7.30 0.00 0.00 67971.94 9159.25 60640.54 00:15:58.912 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x80000 length 0x80000 00:15:58.912 nvme2n1 : 5.04 1802.92 7.04 0.00 0.00 70670.98 7790.62 62325.00 00:15:58.912 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0x80000 00:15:58.912 nvme2n2 : 5.07 1866.62 7.29 0.00 0.00 67868.94 7369.51 57271.62 00:15:58.912 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x80000 length 0x80000 00:15:58.912 nvme2n2 : 5.04 1777.06 6.94 0.00 0.00 71477.88 8474.94 60640.54 00:15:58.912 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0x80000 00:15:58.912 nvme2n3 : 5.08 1866.16 7.29 0.00 0.00 67779.04 6369.36 61903.88 00:15:58.912 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x80000 length 0x80000 00:15:58.912 nvme2n3 : 5.05 1773.31 6.93 0.00 0.00 71488.35 11422.74 61903.88 00:15:58.912 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x0 length 0x20000 00:15:58.912 nvme3n1 : 5.08 1864.99 7.29 0.00 0.00 67771.96 6500.96 66957.26 00:15:58.912 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.912 Verification LBA range: start 0x20000 length 0x20000 00:15:58.912 nvme3n1 : 5.06 1794.79 7.01 0.00 0.00 70522.26 3684.76 66115.03 00:15:58.912 [2024-11-05T03:26:22.496Z] =================================================================================================================== 00:15:58.912 [2024-11-05T03:26:22.496Z] Total : 23867.91 93.23 0.00 0.00 63894.34 3684.76 66957.26 00:16:00.290 00:16:00.290 real 0m7.114s 00:16:00.290 user 0m10.848s 00:16:00.290 sys 0m2.047s 00:16:00.290 03:26:23 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:16:00.290 03:26:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:00.290 ************************************ 00:16:00.290 END TEST bdev_verify 00:16:00.290 ************************************ 00:16:00.290 03:26:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:00.290 03:26:23 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:00.290 03:26:23 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:00.290 03:26:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.290 ************************************ 00:16:00.290 START TEST bdev_verify_big_io 00:16:00.290 ************************************ 00:16:00.290 03:26:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:00.290 [2024-11-05 03:26:23.646781] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:16:00.290 [2024-11-05 03:26:23.646900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71983 ] 00:16:00.290 [2024-11-05 03:26:23.830028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.549 [2024-11-05 03:26:23.947416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.549 [2024-11-05 03:26:23.947461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.116 Running I/O for 5 seconds... 
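The only change from the previous pass is the I/O size (-o 65536 instead of 4096), which is what makes this the "big IO" run. The MiB/s figures bdevperf prints are simply IOPS * io_size / 2^20; checking the first progress sample below:

# 2177 IOPS at 64 KiB per I/O:
awk 'BEGIN { printf "%.2f MiB/s\n", 2177 * 65536 / 1048576 }'   # -> 136.06 MiB/s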
00:16:06.187 2177.00 IOPS, 136.06 MiB/s [2024-11-05T03:26:30.340Z] 2819.50 IOPS, 176.22 MiB/s [2024-11-05T03:26:30.599Z] 3316.33 IOPS, 207.27 MiB/s [2024-11-05T03:26:30.885Z] 3002.50 IOPS, 187.66 MiB/s 00:16:07.301 Latency(us) 00:16:07.301 [2024-11-05T03:26:30.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.301 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0xa000 00:16:07.301 nvme0n1 : 5.82 131.87 8.24 0.00 0.00 936529.34 5579.77 1408208.09 00:16:07.301 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0xa000 length 0xa000 00:16:07.301 nvme0n1 : 5.59 240.59 15.04 0.00 0.00 514193.67 5632.41 599667.56 00:16:07.301 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0xbd0b 00:16:07.301 nvme1n1 : 5.78 102.42 6.40 0.00 0.00 1134713.55 40216.47 1738362.14 00:16:07.301 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:07.301 nvme1n1 : 5.61 176.88 11.06 0.00 0.00 690215.64 40637.58 1105005.39 00:16:07.301 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0x8000 00:16:07.301 nvme2n1 : 5.83 91.15 5.70 0.00 0.00 1231364.04 64430.57 2870318.88 00:16:07.301 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x8000 length 0x8000 00:16:07.301 nvme2n1 : 5.60 205.69 12.86 0.00 0.00 591131.21 79590.71 781589.18 00:16:07.301 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0x8000 00:16:07.301 nvme2n2 : 5.88 95.27 5.95 0.00 0.00 1136783.77 49691.55 3045502.66 00:16:07.301 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x8000 length 0x8000 00:16:07.301 nvme2n2 : 5.60 217.02 13.56 0.00 0.00 550949.21 11896.49 731055.40 00:16:07.301 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0x8000 00:16:07.301 nvme2n3 : 6.05 180.46 11.28 0.00 0.00 577629.65 20318.79 2075254.03 00:16:07.301 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x8000 length 0x8000 00:16:07.301 nvme2n3 : 5.61 211.21 13.20 0.00 0.00 556317.27 73695.10 761375.67 00:16:07.301 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x0 length 0x2000 00:16:07.301 nvme3n1 : 6.23 254.45 15.90 0.00 0.00 396762.24 654.70 1206072.96 00:16:07.301 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.301 Verification LBA range: start 0x2000 length 0x2000 00:16:07.301 nvme3n1 : 5.61 296.59 18.54 0.00 0.00 391624.77 10264.67 579454.05 00:16:07.301 [2024-11-05T03:26:30.886Z] =================================================================================================================== 00:16:07.302 [2024-11-05T03:26:30.886Z] Total : 2203.60 137.72 0.00 0.00 629209.87 654.70 3045502.66 00:16:08.680 00:16:08.680 real 0m8.577s 00:16:08.680 user 0m15.521s 00:16:08.680 sys 0m0.650s 00:16:08.680 03:26:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:16:08.680 03:26:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 ************************************ 00:16:08.680 END TEST bdev_verify_big_io 00:16:08.680 ************************************ 00:16:08.680 03:26:32 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.680 03:26:32 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:08.680 03:26:32 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.680 03:26:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 ************************************ 00:16:08.680 START TEST bdev_write_zeroes 00:16:08.680 ************************************ 00:16:08.680 03:26:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.939 [2024-11-05 03:26:32.302977] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:16:08.939 [2024-11-05 03:26:32.303096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72098 ] 00:16:08.939 [2024-11-05 03:26:32.484611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.198 [2024-11-05 03:26:32.600618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.766 Running I/O for 1 seconds... 
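All of these bdevperf passes read their bdev definitions from the same generated bdev.json. That file is not shown in this log; a hypothetical minimal equivalent for an xnvme run like this one would attach a kernel namespace through the bdev_xnvme_create RPC (the device name, path, and io_mechanism below are illustrative, not taken from this run):

cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_xnvme_create",
    "params": { "name": "nvme0n1", "filename": "/dev/nvme0n1",
                "io_mechanism": "io_uring" } } ] } ] }
EOF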
00:16:10.704 52768.00 IOPS, 206.12 MiB/s 00:16:10.705 Latency(us) 00:16:10.705 [2024-11-05T03:26:34.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.705 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme0n1 : 1.02 8179.18 31.95 0.00 0.00 15635.49 6737.84 29267.48 00:16:10.705 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme1n1 : 1.02 11783.30 46.03 0.00 0.00 10844.53 3816.35 22740.20 00:16:10.705 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme2n1 : 1.03 8198.85 32.03 0.00 0.00 15477.07 4974.42 30951.94 00:16:10.705 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme2n2 : 1.03 8111.44 31.69 0.00 0.00 15634.63 6737.84 29267.48 00:16:10.705 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme2n3 : 1.03 8102.81 31.65 0.00 0.00 15640.66 6737.84 29478.04 00:16:10.705 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:10.705 nvme3n1 : 1.03 8094.32 31.62 0.00 0.00 15648.11 6711.52 29688.60 00:16:10.705 [2024-11-05T03:26:34.289Z] =================================================================================================================== 00:16:10.705 [2024-11-05T03:26:34.289Z] Total : 52469.90 204.96 0.00 0.00 14538.75 3816.35 30951.94 00:16:11.641 00:16:11.641 real 0m2.986s 00:16:11.641 user 0m2.194s 00:16:11.641 sys 0m0.587s 00:16:11.641 ************************************ 00:16:11.641 END TEST bdev_write_zeroes 00:16:11.641 ************************************ 00:16:11.641 03:26:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:11.641 03:26:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:11.900 03:26:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:11.900 03:26:35 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:11.900 03:26:35 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:11.900 03:26:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.900 ************************************ 00:16:11.900 START TEST bdev_json_nonenclosed 00:16:11.900 ************************************ 00:16:11.900 03:26:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:11.900 [2024-11-05 03:26:35.375090] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:16:11.900 [2024-11-05 03:26:35.375327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72153 ] 00:16:12.159 [2024-11-05 03:26:35.558969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.159 [2024-11-05 03:26:35.670148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.159 [2024-11-05 03:26:35.670474] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:12.160 [2024-11-05 03:26:35.670509] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:12.160 [2024-11-05 03:26:35.670523] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.419 00:16:12.419 real 0m0.656s 00:16:12.419 user 0m0.402s 00:16:12.419 sys 0m0.150s 00:16:12.419 03:26:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:12.419 03:26:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:12.419 ************************************ 00:16:12.419 END TEST bdev_json_nonenclosed 00:16:12.419 ************************************ 00:16:12.419 03:26:35 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.419 03:26:35 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:12.419 03:26:35 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:12.419 03:26:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.678 ************************************ 00:16:12.678 START TEST bdev_json_nonarray 00:16:12.678 ************************************ 00:16:12.678 03:26:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.678 [2024-11-05 03:26:36.105738] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:16:12.678 [2024-11-05 03:26:36.105860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72184 ] 00:16:12.937 [2024-11-05 03:26:36.289189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.937 [2024-11-05 03:26:36.404140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.937 [2024-11-05 03:26:36.404243] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:12.937 [2024-11-05 03:26:36.404267] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:12.937 [2024-11-05 03:26:36.404279] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.197 00:16:13.197 real 0m0.656s 00:16:13.197 user 0m0.395s 00:16:13.197 sys 0m0.156s 00:16:13.197 ************************************ 00:16:13.197 END TEST bdev_json_nonarray 00:16:13.197 ************************************ 00:16:13.197 03:26:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.197 03:26:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:13.197 03:26:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:14.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:16.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.037 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.037 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.295 00:16:16.295 real 1m2.542s 00:16:16.295 user 1m41.046s 00:16:16.295 sys 0m32.240s 00:16:16.295 03:26:39 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.295 ************************************ 00:16:16.295 END TEST blockdev_xnvme 00:16:16.295 ************************************ 00:16:16.295 03:26:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.295 03:26:39 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:16.295 03:26:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:16.295 03:26:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.295 03:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:16.295 ************************************ 00:16:16.295 START TEST ublk 00:16:16.295 ************************************ 00:16:16.295 03:26:39 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:16.295 * Looking for test storage... 
00:16:16.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:16.295 03:26:39 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:16.295 03:26:39 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:16:16.295 03:26:39 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.554 03:26:39 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.554 03:26:39 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.554 03:26:39 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.554 03:26:39 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.554 03:26:39 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.554 03:26:39 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:16.554 03:26:39 ublk -- scripts/common.sh@345 -- # : 1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.554 03:26:39 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.554 03:26:39 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@353 -- # local d=1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.554 03:26:39 ublk -- scripts/common.sh@355 -- # echo 1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.554 03:26:39 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@353 -- # local d=2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.554 03:26:39 ublk -- scripts/common.sh@355 -- # echo 2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.554 03:26:39 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.554 03:26:39 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.554 03:26:39 ublk -- scripts/common.sh@368 -- # return 0 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:16.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.554 --rc genhtml_branch_coverage=1 00:16:16.554 --rc genhtml_function_coverage=1 00:16:16.554 --rc genhtml_legend=1 00:16:16.554 --rc geninfo_all_blocks=1 00:16:16.554 --rc geninfo_unexecuted_blocks=1 00:16:16.554 00:16:16.554 ' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:16.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.554 --rc genhtml_branch_coverage=1 00:16:16.554 --rc genhtml_function_coverage=1 00:16:16.554 --rc genhtml_legend=1 00:16:16.554 --rc geninfo_all_blocks=1 00:16:16.554 --rc geninfo_unexecuted_blocks=1 00:16:16.554 00:16:16.554 ' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:16.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.554 --rc genhtml_branch_coverage=1 00:16:16.554 --rc 
genhtml_function_coverage=1 00:16:16.554 --rc genhtml_legend=1 00:16:16.554 --rc geninfo_all_blocks=1 00:16:16.554 --rc geninfo_unexecuted_blocks=1 00:16:16.554 00:16:16.554 ' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:16.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.554 --rc genhtml_branch_coverage=1 00:16:16.554 --rc genhtml_function_coverage=1 00:16:16.554 --rc genhtml_legend=1 00:16:16.554 --rc geninfo_all_blocks=1 00:16:16.554 --rc geninfo_unexecuted_blocks=1 00:16:16.554 00:16:16.554 ' 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:16.554 03:26:39 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:16.554 03:26:39 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:16.554 03:26:39 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:16.554 03:26:39 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:16.554 03:26:39 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:16.554 03:26:39 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:16.554 03:26:39 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:16.554 03:26:39 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:16.554 03:26:39 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.554 03:26:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.554 ************************************ 00:16:16.554 START TEST test_save_ublk_config 00:16:16.554 ************************************ 00:16:16.554 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:16:16.554 03:26:39 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72486 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72486 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72486 ']' 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
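What test_save_ublk_config exercises, sketched as manual rpc.py calls: a minimal sketch assuming the repo path from this log, with the malloc size matching the 8192 x 4096-byte bdev that appears in the saved config below and the -q/-d values matching the ublk_start_disk parameters there:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -L ublk &
# (wait for the rpc socket to appear before issuing RPCs)
"$SPDK/scripts/rpc.py" ublk_create_target
"$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096
"$SPDK/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128
"$SPDK/scripts/rpc.py" save_config > /tmp/ublk.json
# ...stop the target, then bring the same state back from the saved JSON:
"$SPDK/build/bin/spdk_tgt" -L ublk -c /tmp/ublk.json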
00:16:16.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.555 03:26:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:16.555 [2024-11-05 03:26:40.082116] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:16:16.555 [2024-11-05 03:26:40.082433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72486 ] 00:16:16.813 [2024-11-05 03:26:40.264209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.072 [2024-11-05 03:26:40.406598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.009 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.010 [2024-11-05 03:26:41.426324] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:18.010 [2024-11-05 03:26:41.427596] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.010 malloc0 00:16:18.010 [2024-11-05 03:26:41.521479] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:18.010 [2024-11-05 03:26:41.521603] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:18.010 [2024-11-05 03:26:41.521620] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.010 [2024-11-05 03:26:41.521631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.010 [2024-11-05 03:26:41.529347] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.010 [2024-11-05 03:26:41.529380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.010 [2024-11-05 03:26:41.537332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.010 [2024-11-05 03:26:41.537474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.010 [2024-11-05 03:26:41.561341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.010 0 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.010 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 03:26:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:18.577 "subsystems": [ 00:16:18.577 { 00:16:18.577 "subsystem": "fsdev", 00:16:18.577 
"config": [ 00:16:18.577 { 00:16:18.577 "method": "fsdev_set_opts", 00:16:18.577 "params": { 00:16:18.577 "fsdev_io_pool_size": 65535, 00:16:18.577 "fsdev_io_cache_size": 256 00:16:18.577 } 00:16:18.577 } 00:16:18.577 ] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "keyring", 00:16:18.577 "config": [] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "iobuf", 00:16:18.577 "config": [ 00:16:18.577 { 00:16:18.577 "method": "iobuf_set_options", 00:16:18.577 "params": { 00:16:18.577 "small_pool_count": 8192, 00:16:18.577 "large_pool_count": 1024, 00:16:18.577 "small_bufsize": 8192, 00:16:18.577 "large_bufsize": 135168, 00:16:18.577 "enable_numa": false 00:16:18.577 } 00:16:18.577 } 00:16:18.577 ] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "sock", 00:16:18.577 "config": [ 00:16:18.577 { 00:16:18.577 "method": "sock_set_default_impl", 00:16:18.577 "params": { 00:16:18.577 "impl_name": "posix" 00:16:18.577 } 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "method": "sock_impl_set_options", 00:16:18.577 "params": { 00:16:18.577 "impl_name": "ssl", 00:16:18.577 "recv_buf_size": 4096, 00:16:18.577 "send_buf_size": 4096, 00:16:18.577 "enable_recv_pipe": true, 00:16:18.577 "enable_quickack": false, 00:16:18.577 "enable_placement_id": 0, 00:16:18.577 "enable_zerocopy_send_server": true, 00:16:18.577 "enable_zerocopy_send_client": false, 00:16:18.577 "zerocopy_threshold": 0, 00:16:18.577 "tls_version": 0, 00:16:18.577 "enable_ktls": false 00:16:18.577 } 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "method": "sock_impl_set_options", 00:16:18.577 "params": { 00:16:18.577 "impl_name": "posix", 00:16:18.577 "recv_buf_size": 2097152, 00:16:18.577 "send_buf_size": 2097152, 00:16:18.577 "enable_recv_pipe": true, 00:16:18.577 "enable_quickack": false, 00:16:18.577 "enable_placement_id": 0, 00:16:18.577 "enable_zerocopy_send_server": true, 00:16:18.577 "enable_zerocopy_send_client": false, 00:16:18.577 "zerocopy_threshold": 0, 00:16:18.577 "tls_version": 0, 00:16:18.577 "enable_ktls": false 00:16:18.577 } 00:16:18.577 } 00:16:18.577 ] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "vmd", 00:16:18.577 "config": [] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "accel", 00:16:18.577 "config": [ 00:16:18.577 { 00:16:18.577 "method": "accel_set_options", 00:16:18.577 "params": { 00:16:18.577 "small_cache_size": 128, 00:16:18.577 "large_cache_size": 16, 00:16:18.577 "task_count": 2048, 00:16:18.577 "sequence_count": 2048, 00:16:18.577 "buf_count": 2048 00:16:18.577 } 00:16:18.577 } 00:16:18.577 ] 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "subsystem": "bdev", 00:16:18.577 "config": [ 00:16:18.577 { 00:16:18.577 "method": "bdev_set_options", 00:16:18.577 "params": { 00:16:18.577 "bdev_io_pool_size": 65535, 00:16:18.577 "bdev_io_cache_size": 256, 00:16:18.577 "bdev_auto_examine": true, 00:16:18.577 "iobuf_small_cache_size": 128, 00:16:18.577 "iobuf_large_cache_size": 16 00:16:18.577 } 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "method": "bdev_raid_set_options", 00:16:18.577 "params": { 00:16:18.577 "process_window_size_kb": 1024, 00:16:18.577 "process_max_bandwidth_mb_sec": 0 00:16:18.577 } 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "method": "bdev_iscsi_set_options", 00:16:18.577 "params": { 00:16:18.577 "timeout_sec": 30 00:16:18.577 } 00:16:18.577 }, 00:16:18.577 { 00:16:18.577 "method": "bdev_nvme_set_options", 00:16:18.577 "params": { 00:16:18.577 "action_on_timeout": "none", 00:16:18.577 "timeout_us": 0, 00:16:18.577 "timeout_admin_us": 0, 00:16:18.577 
"keep_alive_timeout_ms": 10000, 00:16:18.577 "arbitration_burst": 0, 00:16:18.577 "low_priority_weight": 0, 00:16:18.577 "medium_priority_weight": 0, 00:16:18.577 "high_priority_weight": 0, 00:16:18.577 "nvme_adminq_poll_period_us": 10000, 00:16:18.577 "nvme_ioq_poll_period_us": 0, 00:16:18.577 "io_queue_requests": 0, 00:16:18.577 "delay_cmd_submit": true, 00:16:18.577 "transport_retry_count": 4, 00:16:18.577 "bdev_retry_count": 3, 00:16:18.577 "transport_ack_timeout": 0, 00:16:18.577 "ctrlr_loss_timeout_sec": 0, 00:16:18.577 "reconnect_delay_sec": 0, 00:16:18.577 "fast_io_fail_timeout_sec": 0, 00:16:18.577 "disable_auto_failback": false, 00:16:18.577 "generate_uuids": false, 00:16:18.577 "transport_tos": 0, 00:16:18.577 "nvme_error_stat": false, 00:16:18.577 "rdma_srq_size": 0, 00:16:18.577 "io_path_stat": false, 00:16:18.577 "allow_accel_sequence": false, 00:16:18.577 "rdma_max_cq_size": 0, 00:16:18.577 "rdma_cm_event_timeout_ms": 0, 00:16:18.578 "dhchap_digests": [ 00:16:18.578 "sha256", 00:16:18.578 "sha384", 00:16:18.578 "sha512" 00:16:18.578 ], 00:16:18.578 "dhchap_dhgroups": [ 00:16:18.578 "null", 00:16:18.578 "ffdhe2048", 00:16:18.578 "ffdhe3072", 00:16:18.578 "ffdhe4096", 00:16:18.578 "ffdhe6144", 00:16:18.578 "ffdhe8192" 00:16:18.578 ] 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "bdev_nvme_set_hotplug", 00:16:18.578 "params": { 00:16:18.578 "period_us": 100000, 00:16:18.578 "enable": false 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "bdev_malloc_create", 00:16:18.578 "params": { 00:16:18.578 "name": "malloc0", 00:16:18.578 "num_blocks": 8192, 00:16:18.578 "block_size": 4096, 00:16:18.578 "physical_block_size": 4096, 00:16:18.578 "uuid": "7f6b57a1-b998-46a8-bf47-566fbf744a13", 00:16:18.578 "optimal_io_boundary": 0, 00:16:18.578 "md_size": 0, 00:16:18.578 "dif_type": 0, 00:16:18.578 "dif_is_head_of_md": false, 00:16:18.578 "dif_pi_format": 0 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "bdev_wait_for_examine" 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "scsi", 00:16:18.578 "config": null 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "scheduler", 00:16:18.578 "config": [ 00:16:18.578 { 00:16:18.578 "method": "framework_set_scheduler", 00:16:18.578 "params": { 00:16:18.578 "name": "static" 00:16:18.578 } 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "vhost_scsi", 00:16:18.578 "config": [] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "vhost_blk", 00:16:18.578 "config": [] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "ublk", 00:16:18.578 "config": [ 00:16:18.578 { 00:16:18.578 "method": "ublk_create_target", 00:16:18.578 "params": { 00:16:18.578 "cpumask": "1" 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "ublk_start_disk", 00:16:18.578 "params": { 00:16:18.578 "bdev_name": "malloc0", 00:16:18.578 "ublk_id": 0, 00:16:18.578 "num_queues": 1, 00:16:18.578 "queue_depth": 128 00:16:18.578 } 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "nbd", 00:16:18.578 "config": [] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "nvmf", 00:16:18.578 "config": [ 00:16:18.578 { 00:16:18.578 "method": "nvmf_set_config", 00:16:18.578 "params": { 00:16:18.578 "discovery_filter": "match_any", 00:16:18.578 "admin_cmd_passthru": { 00:16:18.578 "identify_ctrlr": false 00:16:18.578 }, 00:16:18.578 "dhchap_digests": [ 00:16:18.578 "sha256", 00:16:18.578 
"sha384", 00:16:18.578 "sha512" 00:16:18.578 ], 00:16:18.578 "dhchap_dhgroups": [ 00:16:18.578 "null", 00:16:18.578 "ffdhe2048", 00:16:18.578 "ffdhe3072", 00:16:18.578 "ffdhe4096", 00:16:18.578 "ffdhe6144", 00:16:18.578 "ffdhe8192" 00:16:18.578 ] 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "nvmf_set_max_subsystems", 00:16:18.578 "params": { 00:16:18.578 "max_subsystems": 1024 00:16:18.578 } 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "method": "nvmf_set_crdt", 00:16:18.578 "params": { 00:16:18.578 "crdt1": 0, 00:16:18.578 "crdt2": 0, 00:16:18.578 "crdt3": 0 00:16:18.578 } 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }, 00:16:18.578 { 00:16:18.578 "subsystem": "iscsi", 00:16:18.578 "config": [ 00:16:18.578 { 00:16:18.578 "method": "iscsi_set_options", 00:16:18.578 "params": { 00:16:18.578 "node_base": "iqn.2016-06.io.spdk", 00:16:18.578 "max_sessions": 128, 00:16:18.578 "max_connections_per_session": 2, 00:16:18.578 "max_queue_depth": 64, 00:16:18.578 "default_time2wait": 2, 00:16:18.578 "default_time2retain": 20, 00:16:18.578 "first_burst_length": 8192, 00:16:18.578 "immediate_data": true, 00:16:18.578 "allow_duplicated_isid": false, 00:16:18.578 "error_recovery_level": 0, 00:16:18.578 "nop_timeout": 60, 00:16:18.578 "nop_in_interval": 30, 00:16:18.578 "disable_chap": false, 00:16:18.578 "require_chap": false, 00:16:18.578 "mutual_chap": false, 00:16:18.578 "chap_group": 0, 00:16:18.578 "max_large_datain_per_connection": 64, 00:16:18.578 "max_r2t_per_connection": 4, 00:16:18.578 "pdu_pool_size": 36864, 00:16:18.578 "immediate_data_pool_size": 16384, 00:16:18.578 "data_out_pool_size": 2048 00:16:18.578 } 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 } 00:16:18.578 ] 00:16:18.578 }' 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72486 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72486 ']' 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72486 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72486 00:16:18.578 killing process with pid 72486 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72486' 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72486 00:16:18.578 03:26:41 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72486 00:16:19.955 [2024-11-05 03:26:43.424800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.955 [2024-11-05 03:26:43.457366] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.955 [2024-11-05 03:26:43.457552] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.955 [2024-11-05 03:26:43.465340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.955 [2024-11-05 03:26:43.465413] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:16:19.955 [2024-11-05 03:26:43.465435] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.955 [2024-11-05 03:26:43.465472] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.955 [2024-11-05 03:26:43.465655] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72557 00:16:21.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72557 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72557 ']' 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:21.862 03:26:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:21.862 "subsystems": [ 00:16:21.862 { 00:16:21.862 "subsystem": "fsdev", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "fsdev_set_opts", 00:16:21.862 "params": { 00:16:21.862 "fsdev_io_pool_size": 65535, 00:16:21.862 "fsdev_io_cache_size": 256 00:16:21.862 } 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "keyring", 00:16:21.862 "config": [] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "iobuf", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "iobuf_set_options", 00:16:21.862 "params": { 00:16:21.862 "small_pool_count": 8192, 00:16:21.862 "large_pool_count": 1024, 00:16:21.862 "small_bufsize": 8192, 00:16:21.862 "large_bufsize": 135168, 00:16:21.862 "enable_numa": false 00:16:21.862 } 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "sock", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "sock_set_default_impl", 00:16:21.862 "params": { 00:16:21.862 "impl_name": "posix" 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "sock_impl_set_options", 00:16:21.862 "params": { 00:16:21.862 "impl_name": "ssl", 00:16:21.862 "recv_buf_size": 4096, 00:16:21.862 "send_buf_size": 4096, 00:16:21.862 "enable_recv_pipe": true, 00:16:21.862 "enable_quickack": false, 00:16:21.862 "enable_placement_id": 0, 00:16:21.862 "enable_zerocopy_send_server": true, 00:16:21.862 "enable_zerocopy_send_client": false, 00:16:21.862 "zerocopy_threshold": 0, 00:16:21.862 "tls_version": 0, 00:16:21.862 "enable_ktls": false 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "sock_impl_set_options", 00:16:21.862 "params": { 00:16:21.862 "impl_name": "posix", 00:16:21.862 "recv_buf_size": 2097152, 00:16:21.862 "send_buf_size": 2097152, 00:16:21.862 "enable_recv_pipe": true, 00:16:21.862 "enable_quickack": false, 00:16:21.862 "enable_placement_id": 0, 00:16:21.862 "enable_zerocopy_send_server": true, 00:16:21.862 "enable_zerocopy_send_client": false, 00:16:21.862 "zerocopy_threshold": 0, 00:16:21.862 "tls_version": 0, 00:16:21.862 "enable_ktls": false 00:16:21.862 } 00:16:21.862 } 
00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "vmd", 00:16:21.862 "config": [] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "accel", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "accel_set_options", 00:16:21.862 "params": { 00:16:21.862 "small_cache_size": 128, 00:16:21.862 "large_cache_size": 16, 00:16:21.862 "task_count": 2048, 00:16:21.862 "sequence_count": 2048, 00:16:21.862 "buf_count": 2048 00:16:21.862 } 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "bdev", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "bdev_set_options", 00:16:21.862 "params": { 00:16:21.862 "bdev_io_pool_size": 65535, 00:16:21.862 "bdev_io_cache_size": 256, 00:16:21.862 "bdev_auto_examine": true, 00:16:21.862 "iobuf_small_cache_size": 128, 00:16:21.862 "iobuf_large_cache_size": 16 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_raid_set_options", 00:16:21.862 "params": { 00:16:21.862 "process_window_size_kb": 1024, 00:16:21.862 "process_max_bandwidth_mb_sec": 0 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_iscsi_set_options", 00:16:21.862 "params": { 00:16:21.862 "timeout_sec": 30 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_nvme_set_options", 00:16:21.862 "params": { 00:16:21.862 "action_on_timeout": "none", 00:16:21.862 "timeout_us": 0, 00:16:21.862 "timeout_admin_us": 0, 00:16:21.862 "keep_alive_timeout_ms": 10000, 00:16:21.862 "arbitration_burst": 0, 00:16:21.862 "low_priority_weight": 0, 00:16:21.862 "medium_priority_weight": 0, 00:16:21.862 "high_priority_weight": 0, 00:16:21.862 "nvme_adminq_poll_period_us": 10000, 00:16:21.862 "nvme_ioq_poll_period_us": 0, 00:16:21.862 "io_queue_requests": 0, 00:16:21.862 "delay_cmd_submit": true, 00:16:21.862 "transport_retry_count": 4, 00:16:21.862 "bdev_retry_count": 3, 00:16:21.862 "transport_ack_timeout": 0, 00:16:21.862 "ctrlr_loss_timeout_sec": 0, 00:16:21.862 "reconnect_delay_sec": 0, 00:16:21.862 "fast_io_fail_timeout_sec": 0, 00:16:21.862 "disable_auto_failback": false, 00:16:21.862 "generate_uuids": false, 00:16:21.862 "transport_tos": 0, 00:16:21.862 "nvme_error_stat": false, 00:16:21.862 "rdma_srq_size": 0, 00:16:21.862 "io_path_stat": false, 00:16:21.862 "allow_accel_sequence": false, 00:16:21.862 "rdma_max_cq_size": 0, 00:16:21.862 "rdma_cm_event_timeout_ms": 0, 00:16:21.862 "dhchap_digests": [ 00:16:21.862 "sha256", 00:16:21.862 "sha384", 00:16:21.862 "sha512" 00:16:21.862 ], 00:16:21.862 "dhchap_dhgroups": [ 00:16:21.862 "null", 00:16:21.862 "ffdhe2048", 00:16:21.862 "ffdhe3072", 00:16:21.862 "ffdhe4096", 00:16:21.862 "ffdhe6144", 00:16:21.862 "ffdhe8192" 00:16:21.862 ] 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_nvme_set_hotplug", 00:16:21.862 "params": { 00:16:21.862 "period_us": 100000, 00:16:21.862 "enable": false 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_malloc_create", 00:16:21.862 "params": { 00:16:21.862 "name": "malloc0", 00:16:21.862 "num_blocks": 8192, 00:16:21.862 "block_size": 4096, 00:16:21.862 "physical_block_size": 4096, 00:16:21.862 "uuid": "7f6b57a1-b998-46a8-bf47-566fbf744a13", 00:16:21.862 "optimal_io_boundary": 0, 00:16:21.862 "md_size": 0, 00:16:21.862 "dif_type": 0, 00:16:21.862 "dif_is_head_of_md": false, 00:16:21.862 "dif_pi_format": 0 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "bdev_wait_for_examine" 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 
00:16:21.862 { 00:16:21.862 "subsystem": "scsi", 00:16:21.862 "config": null 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "scheduler", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "framework_set_scheduler", 00:16:21.862 "params": { 00:16:21.862 "name": "static" 00:16:21.862 } 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "vhost_scsi", 00:16:21.862 "config": [] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "vhost_blk", 00:16:21.862 "config": [] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "ublk", 00:16:21.862 "config": [ 00:16:21.862 { 00:16:21.862 "method": "ublk_create_target", 00:16:21.862 "params": { 00:16:21.862 "cpumask": "1" 00:16:21.862 } 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "method": "ublk_start_disk", 00:16:21.862 "params": { 00:16:21.862 "bdev_name": "malloc0", 00:16:21.862 "ublk_id": 0, 00:16:21.862 "num_queues": 1, 00:16:21.862 "queue_depth": 128 00:16:21.862 } 00:16:21.862 } 00:16:21.862 ] 00:16:21.862 }, 00:16:21.862 { 00:16:21.862 "subsystem": "nbd", 00:16:21.862 "config": [] 00:16:21.862 }, 00:16:21.862 { 00:16:21.863 "subsystem": "nvmf", 00:16:21.863 "config": [ 00:16:21.863 { 00:16:21.863 "method": "nvmf_set_config", 00:16:21.863 "params": { 00:16:21.863 "discovery_filter": "match_any", 00:16:21.863 "admin_cmd_passthru": { 00:16:21.863 "identify_ctrlr": false 00:16:21.863 }, 00:16:21.863 "dhchap_digests": [ 00:16:21.863 "sha256", 00:16:21.863 "sha384", 00:16:21.863 "sha512" 00:16:21.863 ], 00:16:21.863 "dhchap_dhgroups": [ 00:16:21.863 "null", 00:16:21.863 "ffdhe2048", 00:16:21.863 "ffdhe3072", 00:16:21.863 "ffdhe4096", 00:16:21.863 "ffdhe6144", 00:16:21.863 "ffdhe8192" 00:16:21.863 ] 00:16:21.863 } 00:16:21.863 }, 00:16:21.863 { 00:16:21.863 "method": "nvmf_set_max_subsystems", 00:16:21.863 "params": { 00:16:21.863 "max_subsystems": 1024 00:16:21.863 } 00:16:21.863 }, 00:16:21.863 { 00:16:21.863 "method": "nvmf_set_crdt", 00:16:21.863 "params": { 00:16:21.863 "crdt1": 0, 00:16:21.863 "crdt2": 0, 00:16:21.863 "crdt3": 0 00:16:21.863 } 00:16:21.863 } 00:16:21.863 ] 00:16:21.863 }, 00:16:21.863 { 00:16:21.863 "subsystem": "iscsi", 00:16:21.863 "config": [ 00:16:21.863 { 00:16:21.863 "method": "iscsi_set_options", 00:16:21.863 "params": { 00:16:21.863 "node_base": "iqn.2016-06.io.spdk", 00:16:21.863 "max_sessions": 128, 00:16:21.863 "max_connections_per_session": 2, 00:16:21.863 "max_queue_depth": 64, 00:16:21.863 "default_time2wait": 2, 00:16:21.863 "default_time2retain": 20, 00:16:21.863 "first_burst_length": 8192, 00:16:21.863 "immediate_data": true, 00:16:21.863 "allow_duplicated_isid": false, 00:16:21.863 "error_recovery_level": 0, 00:16:21.863 "nop_timeout": 60, 00:16:21.863 "nop_in_interval": 30, 00:16:21.863 "disable_chap": false, 00:16:21.863 "require_chap": false, 00:16:21.863 "mutual_chap": false, 00:16:21.863 "chap_group": 0, 00:16:21.863 "max_large_datain_per_connection": 64, 00:16:21.863 "max_r2t_per_connection": 4, 00:16:21.863 "pdu_pool_size": 36864, 00:16:21.863 "immediate_data_pool_size": 16384, 00:16:21.863 "data_out_pool_size": 2048 00:16:21.863 } 00:16:21.863 } 00:16:21.863 ] 00:16:21.863 } 00:16:21.863 ] 00:16:21.863 }' 00:16:21.863 03:26:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.122 [2024-11-05 03:26:45.515927] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:16:22.122 [2024-11-05 03:26:45.516274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72557 ] 00:16:22.122 [2024-11-05 03:26:45.699779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.382 [2024-11-05 03:26:45.836396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.768 [2024-11-05 03:26:47.007327] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.768 [2024-11-05 03:26:47.008696] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.768 [2024-11-05 03:26:47.015486] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.768 [2024-11-05 03:26:47.015604] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.768 [2024-11-05 03:26:47.015621] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.768 [2024-11-05 03:26:47.015631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.768 [2024-11-05 03:26:47.024432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.768 [2024-11-05 03:26:47.024465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.768 [2024-11-05 03:26:47.031341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.768 [2024-11-05 03:26:47.031454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.768 [2024-11-05 03:26:47.048328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72557 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72557 ']' 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72557 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72557 00:16:23.768 killing process with pid 72557 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.768 
03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72557' 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72557 00:16:23.768 03:26:47 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72557 00:16:25.678 [2024-11-05 03:26:48.816751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:25.678 [2024-11-05 03:26:48.853402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:25.678 [2024-11-05 03:26:48.853555] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:25.678 [2024-11-05 03:26:48.859326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:25.678 [2024-11-05 03:26:48.859394] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:25.678 [2024-11-05 03:26:48.859406] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:25.678 [2024-11-05 03:26:48.859441] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:25.678 [2024-11-05 03:26:48.859613] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:27.594 03:26:50 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:27.594 00:16:27.594 real 0m10.843s 00:16:27.594 user 0m8.150s 00:16:27.594 sys 0m3.431s 00:16:27.594 ************************************ 00:16:27.594 END TEST test_save_ublk_config 00:16:27.594 ************************************ 00:16:27.594 03:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.594 03:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.594 03:26:50 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72649 00:16:27.594 03:26:50 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:27.594 03:26:50 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.594 03:26:50 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72649 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@833 -- # '[' -z 72649 ']' 00:16:27.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.594 03:26:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.594 [2024-11-05 03:26:50.991332] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:16:27.594 [2024-11-05 03:26:50.991474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72649 ] 00:16:27.853 [2024-11-05 03:26:51.177922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.853 [2024-11-05 03:26:51.319006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.853 [2024-11-05 03:26:51.319036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.791 03:26:52 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:28.791 03:26:52 ublk -- common/autotest_common.sh@866 -- # return 0 00:16:28.791 03:26:52 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:28.791 03:26:52 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:28.791 03:26:52 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:28.791 03:26:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.791 ************************************ 00:16:28.791 START TEST test_create_ublk 00:16:28.791 ************************************ 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:16:28.791 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.791 [2024-11-05 03:26:52.349330] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.791 [2024-11-05 03:26:52.352623] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.791 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:28.791 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.791 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.359 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.360 [2024-11-05 03:26:52.685504] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:29.360 [2024-11-05 03:26:52.686049] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:29.360 [2024-11-05 03:26:52.686087] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:29.360 [2024-11-05 03:26:52.686099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:29.360 [2024-11-05 03:26:52.693889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:29.360 [2024-11-05 03:26:52.693925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:29.360 
[2024-11-05 03:26:52.701347] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:29.360 [2024-11-05 03:26:52.714396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:29.360 [2024-11-05 03:26:52.729364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:29.360 03:26:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:16:29.360 {
00:16:29.360 "ublk_device": "/dev/ublkb0",
00:16:29.360 "id": 0,
00:16:29.360 "queue_depth": 512,
00:16:29.360 "num_queues": 4,
00:16:29.360 "bdev_name": "Malloc0"
00:16:29.360 }
00:16:29.360 ]'
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:16:29.360 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:16:29.619 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:29.619 03:26:52 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:29.619 03:26:52 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:16:29.619 fio: verification read phase will never start because write phase uses all of runtime
00:16:29.619 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:16:29.619 fio-3.35
00:16:29.619 Starting 1 process
00:16:39.692
00:16:39.692 fio_test: (groupid=0, jobs=1): err= 0: pid=72707: Tue Nov 5 03:27:03 2024
00:16:39.692 write: IOPS=6204, BW=24.2MiB/s (25.4MB/s)(242MiB/10001msec); 0 zone resets
00:16:39.692 clat (usec): min=42, max=4044, avg=160.30, stdev=117.67
00:16:39.692 lat (usec): min=42, max=4044, avg=160.78, stdev=117.69
00:16:39.692 clat percentiles (usec):
00:16:39.692 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 137], 20.00th=[ 151],
00:16:39.692 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167],
00:16:39.692 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184],
00:16:39.692 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 2442], 99.95th=[ 3064],
00:16:39.692 | 99.99th=[ 3654]
00:16:39.692 bw ( KiB/s): min=23296, max=48152, per=100.00%, avg=24930.11, stdev=5639.41, samples=19
00:16:39.692 iops : min= 5824, max=12038, avg=6232.53, stdev=1409.85, samples=19
00:16:39.692 lat (usec) : 50=6.14%, 100=1.82%, 250=91.72%, 500=0.04%, 750=0.02%
00:16:39.692 lat (usec) : 1000=0.02%
00:16:39.692 lat (msec) : 2=0.09%, 4=0.15%, 10=0.01%
00:16:39.692 cpu : usr=1.08%, sys=4.92%, ctx=62050, majf=0, minf=795
00:16:39.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:39.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:39.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:39.692 issued rwts: total=0,62050,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:39.692 latency : target=0, window=0, percentile=100.00%, depth=1
00:16:39.692
00:16:39.692 Run status group 0 (all jobs):
00:16:39.692 WRITE: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=242MiB (254MB), run=10001-10001msec
00:16:39.692
00:16:39.692 Disk stats (read/write):
00:16:39.692 ublkb0: ios=0/61456, merge=0/0, ticks=0/9253, in_queue=9254, util=99.14%
00:16:39.692 03:27:03 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:16:39.692 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.692 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:39.692 [2024-11-05 03:27:03.229623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:39.692 [2024-11-05 03:27:03.268378] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:39.692 [2024-11-05 03:27:03.269479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:39.951 [2024-11-05 03:27:03.283321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:39.951 [2024-11-05 03:27:03.283761] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:39.951 [2024-11-05 03:27:03.283895] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.951 03:27:03 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:39.951 [2024-11-05 03:27:03.299429] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:16:39.951 request:
00:16:39.951 {
00:16:39.951 "ublk_id": 0,
00:16:39.951 "method": "ublk_stop_disk",
00:16:39.951 "req_id": 1
00:16:39.951 }
00:16:39.951 Got JSON-RPC error response
00:16:39.951 response:
00:16:39.951 {
00:16:39.951 "code": -19,
00:16:39.951 "message": "No such device"
00:16:39.951 }
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:39.951 03:27:03 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.951 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:39.951 [2024-11-05 03:27:03.321460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:39.951 [2024-11-05 03:27:03.331311] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:39.952 [2024-11-05 03:27:03.331367] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:39.952 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.952 03:27:03 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:39.952 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.952 03:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.889 03:27:04 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:16:40.889 03:27:04
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:40.889 03:27:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:40.889 ************************************ 00:16:40.889 END TEST test_create_ublk 00:16:40.889 ************************************ 00:16:40.889 00:16:40.889 real 0m11.936s 00:16:40.889 user 0m0.495s 00:16:40.889 sys 0m0.634s 00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:40.889 03:27:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.889 03:27:04 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:40.889 03:27:04 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:40.889 03:27:04 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:40.889 03:27:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.889 ************************************ 00:16:40.889 START TEST test_create_multi_ublk 00:16:40.889 ************************************ 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.889 [2024-11-05 03:27:04.363320] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:40.889 [2024-11-05 03:27:04.366614] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.889 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.148 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:41.148 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:41.148 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.148 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 [2024-11-05 03:27:04.696562] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:41.148 [2024-11-05 03:27:04.697136] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:41.148 [2024-11-05 03:27:04.697156] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:41.148 [2024-11-05 03:27:04.697174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.148 [2024-11-05 03:27:04.704371] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.148 [2024-11-05 03:27:04.704408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.148 [2024-11-05 03:27:04.712344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.148 [2024-11-05 03:27:04.713058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:41.148 [2024-11-05 03:27:04.724400] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.407 03:27:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.666 [2024-11-05 03:27:05.071540] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:41.666 [2024-11-05 03:27:05.072123] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:41.666 [2024-11-05 03:27:05.072149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:41.666 [2024-11-05 03:27:05.072159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.666 [2024-11-05 03:27:05.080773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.666 [2024-11-05 03:27:05.080810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.666 [2024-11-05 03:27:05.087364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.666 [2024-11-05 03:27:05.088160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:41.666 [2024-11-05 03:27:05.096409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.666 
03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.666 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.926 [2024-11-05 03:27:05.453480] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:41.926 [2024-11-05 03:27:05.454030] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:41.926 [2024-11-05 03:27:05.454051] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:41.926 [2024-11-05 03:27:05.454065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.926 [2024-11-05 03:27:05.461353] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.926 [2024-11-05 03:27:05.461389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.926 [2024-11-05 03:27:05.469332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.926 [2024-11-05 03:27:05.469995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:41.926 [2024-11-05 03:27:05.473059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.926 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.493 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.493 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:42.493 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:42.493 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.493 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.493 [2024-11-05 03:27:05.816538] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:42.493 [2024-11-05 03:27:05.817068] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:42.493 [2024-11-05 03:27:05.817092] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:42.493 [2024-11-05 03:27:05.817102] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:42.493 
[2024-11-05 03:27:05.825805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:42.493 [2024-11-05 03:27:05.825835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:42.493 [2024-11-05 03:27:05.832344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:42.493 [2024-11-05 03:27:05.832978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:16:42.493 [2024-11-05 03:27:05.837013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:16:42.494 {
00:16:42.494 "ublk_device": "/dev/ublkb0",
00:16:42.494 "id": 0,
00:16:42.494 "queue_depth": 512,
00:16:42.494 "num_queues": 4,
00:16:42.494 "bdev_name": "Malloc0"
00:16:42.494 },
00:16:42.494 {
00:16:42.494 "ublk_device": "/dev/ublkb1",
00:16:42.494 "id": 1,
00:16:42.494 "queue_depth": 512,
00:16:42.494 "num_queues": 4,
00:16:42.494 "bdev_name": "Malloc1"
00:16:42.494 },
00:16:42.494 {
00:16:42.494 "ublk_device": "/dev/ublkb2",
00:16:42.494 "id": 2,
00:16:42.494 "queue_depth": 512,
00:16:42.494 "num_queues": 4,
00:16:42.494 "bdev_name": "Malloc2"
00:16:42.494 },
00:16:42.494 {
00:16:42.494 "ublk_device": "/dev/ublkb3",
00:16:42.494 "id": 3,
00:16:42.494 "queue_depth": 512,
00:16:42.494 "num_queues": 4,
00:16:42.494 "bdev_name": "Malloc3"
00:16:42.494 }
00:16:42.494 ]'
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:16:42.494 03:27:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:16:42.494 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:42.494 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:16:42.494 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:42.494 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 =
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:42.753 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:43.012 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.295 [2024-11-05 03:27:06.759549] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.295 [2024-11-05 03:27:06.799998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.295 [2024-11-05 03:27:06.801599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.295 [2024-11-05 03:27:06.807388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.295 [2024-11-05 03:27:06.807791] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.295 [2024-11-05 03:27:06.807817] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.295 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.296 [2024-11-05 03:27:06.823431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.296 [2024-11-05 03:27:06.854852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.296 [2024-11-05 03:27:06.856354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.296 [2024-11-05 03:27:06.863348] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.296 [2024-11-05 03:27:06.863684] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:43.296 [2024-11-05 03:27:06.863707] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.296 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.555 [2024-11-05 03:27:06.879454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.555 [2024-11-05 03:27:06.913410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.555 [2024-11-05 03:27:06.914495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.555 [2024-11-05 03:27:06.922389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.555 [2024-11-05 03:27:06.922755] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:43.555 [2024-11-05 03:27:06.922779] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.555 [2024-11-05 03:27:06.937470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.555 [2024-11-05 03:27:06.969386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.555 [2024-11-05 03:27:06.970229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.555 [2024-11-05 03:27:06.978362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.555 [2024-11-05 03:27:06.978678] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:43.555 [2024-11-05 03:27:06.978697] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.555 03:27:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:43.814 [2024-11-05 03:27:07.182496] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.814 [2024-11-05 03:27:07.190323] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.814 [2024-11-05 03:27:07.190379] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:43.814 03:27:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:43.814 03:27:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.814 03:27:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:43.814 03:27:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.814 03:27:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.751 03:27:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.751 03:27:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.751 03:27:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:44.751 03:27:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.751 03:27:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.009 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.009 03:27:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.009 03:27:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:45.009 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.009 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.268 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.268 03:27:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.268 03:27:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:45.268 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.268 03:27:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:45.837 03:27:09 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:45.837 00:16:45.837 real 0m4.983s 00:16:45.837 user 0m1.010s 00:16:45.837 sys 0m0.253s 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.837 ************************************ 00:16:45.837 END TEST test_create_multi_ublk 00:16:45.837 ************************************ 00:16:45.837 03:27:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 03:27:09 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:45.837 03:27:09 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:45.837 03:27:09 ublk -- ublk/ublk.sh@130 -- # killprocess 72649 00:16:45.837 03:27:09 ublk -- common/autotest_common.sh@952 -- # '[' -z 72649 ']' 00:16:45.837 03:27:09 ublk -- common/autotest_common.sh@956 -- # kill -0 72649 00:16:45.837 03:27:09 ublk -- common/autotest_common.sh@957 -- # uname 00:16:45.837 03:27:09 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:45.837 03:27:09 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72649 00:16:46.097 03:27:09 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:46.097 03:27:09 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:46.097 killing process with pid 72649 00:16:46.097 03:27:09 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72649' 00:16:46.097 03:27:09 ublk -- common/autotest_common.sh@971 -- # kill 72649 00:16:46.097 03:27:09 ublk -- common/autotest_common.sh@976 -- # wait 72649 00:16:47.475 [2024-11-05 03:27:10.695534] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.475 [2024-11-05 03:27:10.695643] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.854 00:16:48.854 real 0m32.338s 00:16:48.854 user 0m46.293s 00:16:48.855 sys 0m9.855s 00:16:48.855 03:27:12 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:48.855 03:27:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.855 ************************************ 00:16:48.855 END TEST ublk 00:16:48.855 ************************************ 00:16:48.855 03:27:12 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:48.855 
03:27:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:48.855 03:27:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.855 03:27:12 -- common/autotest_common.sh@10 -- # set +x 00:16:48.855 ************************************ 00:16:48.855 START TEST ublk_recovery 00:16:48.855 ************************************ 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:48.855 * Looking for test storage... 00:16:48.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.855 03:27:12 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.855 --rc genhtml_branch_coverage=1 00:16:48.855 --rc genhtml_function_coverage=1 00:16:48.855 --rc genhtml_legend=1 00:16:48.855 --rc geninfo_all_blocks=1 00:16:48.855 --rc geninfo_unexecuted_blocks=1 00:16:48.855 00:16:48.855 ' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.855 --rc genhtml_branch_coverage=1 00:16:48.855 --rc genhtml_function_coverage=1 00:16:48.855 --rc genhtml_legend=1 00:16:48.855 --rc geninfo_all_blocks=1 00:16:48.855 --rc geninfo_unexecuted_blocks=1 00:16:48.855 00:16:48.855 ' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.855 --rc genhtml_branch_coverage=1 00:16:48.855 --rc genhtml_function_coverage=1 00:16:48.855 --rc genhtml_legend=1 00:16:48.855 --rc geninfo_all_blocks=1 00:16:48.855 --rc geninfo_unexecuted_blocks=1 00:16:48.855 00:16:48.855 ' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.855 --rc genhtml_branch_coverage=1 00:16:48.855 --rc genhtml_function_coverage=1 00:16:48.855 --rc genhtml_legend=1 00:16:48.855 --rc geninfo_all_blocks=1 00:16:48.855 --rc geninfo_unexecuted_blocks=1 00:16:48.855 00:16:48.855 ' 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:48.855 03:27:12 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73093 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.855 03:27:12 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73093 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73093 ']' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.855 03:27:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.114 [2024-11-05 03:27:12.454030] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:16:49.114 [2024-11-05 03:27:12.454177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:16:49.114 [2024-11-05 03:27:12.635901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.373 [2024-11-05 03:27:12.753700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.373 [2024-11-05 03:27:12.753734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:16:50.309 03:27:13 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 [2024-11-05 03:27:13.678313] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:50.309 [2024-11-05 03:27:13.681340] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.309 03:27:13 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 malloc0 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.309 03:27:13 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 03:27:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 [2024-11-05 03:27:13.862490] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:50.309 [2024-11-05 03:27:13.862613] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:50.309 [2024-11-05 03:27:13.862630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:50.309 [2024-11-05 03:27:13.862643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:50.309 [2024-11-05 03:27:13.871433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:50.309 [2024-11-05 03:27:13.871459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:50.309 [2024-11-05 03:27:13.878326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:50.309 [2024-11-05 03:27:13.878483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:50.309 [2024-11-05 03:27:13.889330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:50.567 1 00:16:50.567 03:27:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.567 03:27:13 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:51.500 03:27:14 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73134 00:16:51.500 03:27:14 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:51.500 03:27:14 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:51.500 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.500 fio-3.35 00:16:51.501 Starting 1 process 00:16:56.768 03:27:19 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73093 00:16:56.768 03:27:19 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:02.036 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73093 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:02.036 03:27:24 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73241 00:17:02.036 03:27:24 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:02.036 03:27:24 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.036 03:27:24 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73241 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73241 ']' 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.036 03:27:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 [2024-11-05 03:27:25.025934] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:17:02.036 [2024-11-05 03:27:25.026047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73241 ] 00:17:02.036 [2024-11-05 03:27:25.191226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.036 [2024-11-05 03:27:25.355542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.036 [2024-11-05 03:27:25.355576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:02.974 03:27:26 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 [2024-11-05 03:27:26.276311] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:02.974 [2024-11-05 03:27:26.279061] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.974 03:27:26 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 malloc0 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.974 03:27:26 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 [2024-11-05 03:27:26.425528] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:02.974 [2024-11-05 03:27:26.425573] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:02.974 [2024-11-05 03:27:26.425586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.974 [2024-11-05 03:27:26.433376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.974 [2024-11-05 03:27:26.433403] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:02.974 [2024-11-05 03:27:26.433415] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:02.974 [2024-11-05 03:27:26.433518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:02.974 1 00:17:02.974 03:27:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.974 03:27:26 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73134 00:17:02.974 [2024-11-05 03:27:26.441327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:02.974 [2024-11-05 03:27:26.448174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:02.974 [2024-11-05 03:27:26.454364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:02.974 [2024-11-05 
03:27:26.454391] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:17:59.212
00:17:59.212 fio_test: (groupid=0, jobs=1): err= 0: pid=73137: Tue Nov 5 03:28:15 2024
00:17:59.212 read: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(4217MiB/60002msec)
00:17:59.212 slat (usec): min=2, max=467, avg= 9.25, stdev= 2.66
00:17:59.212 clat (usec): min=1063, max=6554.5k, avg=3394.07, stdev=43225.88
00:17:59.212 lat (usec): min=1071, max=6554.5k, avg=3403.33, stdev=43225.89
00:17:59.212 clat percentiles (usec):
00:17:59.212 | 1.00th=[ 2212], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2802],
00:17:59.212 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097],
00:17:59.212 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3523], 95.00th=[ 4228],
00:17:59.212 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 8979], 99.95th=[ 9241],
00:17:59.212 | 99.99th=[13566]
00:17:59.212 bw ( KiB/s): min=14192, max=102864, per=100.00%, avg=80051.93, stdev=10984.58, samples=107
00:17:59.212 iops : min= 3548, max=25716, avg=20012.93, stdev=2746.14, samples=107
00:17:59.212 write: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(4214MiB/60002msec); 0 zone resets
00:17:59.212 slat (usec): min=2, max=336, avg= 9.34, stdev= 2.76
00:17:59.212 clat (usec): min=1074, max=6555.1k, avg=3703.10, stdev=56763.05
00:17:59.212 lat (usec): min=1080, max=6555.1k, avg=3712.44, stdev=56763.06
00:17:59.212 clat percentiles (usec):
00:17:59.212 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2868],
00:17:59.212 | 30.00th=[ 3032], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228],
00:17:59.212 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3621], 95.00th=[ 4228],
00:17:59.212 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 9241], 99.95th=[ 9503],
00:17:59.212 | 99.99th=[13829]
00:17:59.212 bw ( KiB/s): min=14680, max=103744, per=100.00%, avg=79999.30, stdev=11019.72, samples=107
00:17:59.212 iops : min= 3670, max=25936, avg=19999.79, stdev=2754.93, samples=107
00:17:59.212 lat (msec) : 2=0.16%, 4=93.73%, 10=6.07%, 20=0.03%, >=2000=0.01%
00:17:59.212 cpu : usr=12.28%, sys=33.33%, ctx=95812, majf=0, minf=13
00:17:59.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:59.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:59.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:59.212 issued rwts: total=1079432,1078717,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:59.212 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:59.212
00:17:59.212 Run status group 0 (all jobs):
00:17:59.212 READ: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=4217MiB (4421MB), run=60002-60002msec
00:17:59.212 WRITE: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=4214MiB (4418MB), run=60002-60002msec
00:17:59.212
00:17:59.212 Disk stats (read/write):
00:17:59.212 ublkb1: ios=1077112/1076473, merge=0/0, ticks=3545664/3740804, in_queue=7286468, util=99.96%
00:17:59.212 03:28:15 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:59.212 [2024-11-05 03:28:15.183682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:17:59.212 [2024-11-05 03:28:15.226388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:59.212 [2024-11-05 03:28:15.226722]
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.212 [2024-11-05 03:28:15.238382] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.212 [2024-11-05 03:28:15.238551] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:59.212 [2024-11-05 03:28:15.238565] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.212 03:28:15 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.212 [2024-11-05 03:28:15.252420] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:59.212 [2024-11-05 03:28:15.261182] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:59.212 [2024-11-05 03:28:15.261221] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:59.212 03:28:15 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.213 03:28:15 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:59.213 03:28:15 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:59.213 03:28:15 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73241 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73241 ']' 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73241 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73241 00:17:59.213 killing process with pid 73241 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73241' 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73241 00:17:59.213 03:28:15 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73241 00:17:59.213 [2024-11-05 03:28:16.928794] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:59.213 [2024-11-05 03:28:16.928882] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:59.213 00:17:59.213 real 1m6.251s 00:17:59.213 user 1m51.004s 00:17:59.213 sys 0m37.854s 00:17:59.213 03:28:18 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.213 03:28:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.213 ************************************ 00:17:59.213 END TEST ublk_recovery 00:17:59.213 ************************************ 00:17:59.213 03:28:18 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:59.213 03:28:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.213 03:28:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.213 03:28:18 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:59.213 03:28:18 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:59.213 03:28:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:59.213 03:28:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.213 03:28:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.213 ************************************ 00:17:59.213 START TEST ftl 00:17:59.213 ************************************ 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:59.213 * Looking for test storage... 00:17:59.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.213 03:28:18 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.213 03:28:18 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.213 03:28:18 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.213 03:28:18 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.213 03:28:18 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.213 03:28:18 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:59.213 03:28:18 ftl -- scripts/common.sh@345 -- # : 1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.213 03:28:18 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.213 03:28:18 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@353 -- # local d=1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.213 03:28:18 ftl -- scripts/common.sh@355 -- # echo 1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.213 03:28:18 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@353 -- # local d=2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.213 03:28:18 ftl -- scripts/common.sh@355 -- # echo 2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.213 03:28:18 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.213 03:28:18 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.213 03:28:18 ftl -- scripts/common.sh@368 -- # return 0 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:59.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.213 --rc genhtml_branch_coverage=1 00:17:59.213 --rc genhtml_function_coverage=1 00:17:59.213 --rc genhtml_legend=1 00:17:59.213 --rc geninfo_all_blocks=1 00:17:59.213 --rc geninfo_unexecuted_blocks=1 00:17:59.213 00:17:59.213 ' 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:59.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.213 --rc genhtml_branch_coverage=1 00:17:59.213 --rc genhtml_function_coverage=1 00:17:59.213 --rc genhtml_legend=1 00:17:59.213 --rc geninfo_all_blocks=1 00:17:59.213 --rc geninfo_unexecuted_blocks=1 00:17:59.213 00:17:59.213 ' 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:59.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.213 --rc genhtml_branch_coverage=1 00:17:59.213 --rc genhtml_function_coverage=1 00:17:59.213 --rc genhtml_legend=1 00:17:59.213 --rc geninfo_all_blocks=1 00:17:59.213 --rc geninfo_unexecuted_blocks=1 00:17:59.213 00:17:59.213 ' 00:17:59.213 03:28:18 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:59.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.213 --rc genhtml_branch_coverage=1 00:17:59.213 --rc genhtml_function_coverage=1 00:17:59.213 --rc genhtml_legend=1 00:17:59.213 --rc geninfo_all_blocks=1 00:17:59.213 --rc geninfo_unexecuted_blocks=1 00:17:59.213 00:17:59.213 ' 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:59.213 03:28:18 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:59.213 03:28:18 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.213 03:28:18 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.213 03:28:18 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
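
The lt 1.15 2 / cmp_versions 1.15 '<' 2 trace above is scripts/common.sh splitting each version string on dots and dashes and comparing it component by component. A compressed sketch of that logic, assuming nothing beyond what the xtrace shows (the name ver_lt and the zero default for missing components are illustrative choices, not the exact SPDK code):

ver_lt() {                                   # ver_lt 1.15 2  -> true iff $1 < $2
    local -a a b
    local i x y
    IFS=.-: read -ra a <<< "$1"              # "1.15" -> (1 15)
    IFS=.-: read -ra b <<< "$2"              # "2"    -> (2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}            # absent components compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1                                 # equal versions are not less-than
}

Here the first components already decide it (1 < 2), which is why the trace returns 0 and goes on to export the lcov branch/function coverage options.
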
00:17:59.213 03:28:18 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:59.213 03:28:18 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.213 03:28:18 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.213 03:28:18 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.213 03:28:18 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.213 03:28:18 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.213 03:28:18 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:59.213 03:28:18 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:59.213 03:28:18 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.213 03:28:18 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.213 03:28:18 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:59.213 03:28:18 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.213 03:28:18 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.213 03:28:18 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.213 03:28:18 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.213 03:28:18 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:59.213 03:28:18 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:59.213 03:28:18 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.213 03:28:18 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:59.213 03:28:18 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:59.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:59.213 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.213 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.213 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.213 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.213 03:28:19 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74053 00:17:59.214 03:28:19 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:59.214 03:28:19 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74053 00:17:59.214 03:28:19 ftl -- common/autotest_common.sh@833 -- # '[' -z 74053 ']' 00:17:59.214 03:28:19 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.214 03:28:19 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:59.214 03:28:19 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.214 03:28:19 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:59.214 03:28:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:59.214 [2024-11-05 03:28:19.700607] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:17:59.214 [2024-11-05 03:28:19.701318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74053 ] 00:17:59.214 [2024-11-05 03:28:19.887048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.214 [2024-11-05 03:28:19.998587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.214 03:28:20 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:59.214 03:28:20 ftl -- common/autotest_common.sh@866 -- # return 0 00:17:59.214 03:28:20 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:59.214 03:28:20 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:59.214 03:28:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:59.214 03:28:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@50 -- # break 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@63 -- # break 00:17:59.214 03:28:22 ftl -- ftl/ftl.sh@66 -- # killprocess 74053 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@952 -- # '[' -z 74053 ']' 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@956 -- # kill -0 74053 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@957 -- # uname 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.214 03:28:22 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74053 00:17:59.214 killing process with pid 74053 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74053' 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@971 -- # kill 74053 00:17:59.214 03:28:22 ftl -- common/autotest_common.sh@976 -- # wait 74053 00:18:01.748 03:28:25 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:01.748 03:28:25 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:01.748 03:28:25 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:01.748 03:28:25 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:01.748 03:28:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:01.748 ************************************ 00:18:01.748 START TEST ftl_fio_basic 00:18:01.748 ************************************ 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:01.748 * Looking for test storage... 00:18:01.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.748 --rc genhtml_branch_coverage=1 00:18:01.748 --rc genhtml_function_coverage=1 00:18:01.748 --rc genhtml_legend=1 00:18:01.748 --rc geninfo_all_blocks=1 00:18:01.748 --rc geninfo_unexecuted_blocks=1 00:18:01.748 00:18:01.748 ' 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.748 --rc genhtml_branch_coverage=1 00:18:01.748 --rc genhtml_function_coverage=1 00:18:01.748 --rc genhtml_legend=1 00:18:01.748 --rc geninfo_all_blocks=1 00:18:01.748 --rc geninfo_unexecuted_blocks=1 00:18:01.748 00:18:01.748 ' 00:18:01.748 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.749 --rc genhtml_branch_coverage=1 00:18:01.749 --rc genhtml_function_coverage=1 00:18:01.749 --rc genhtml_legend=1 00:18:01.749 --rc geninfo_all_blocks=1 00:18:01.749 --rc geninfo_unexecuted_blocks=1 00:18:01.749 00:18:01.749 ' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:01.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.749 --rc genhtml_branch_coverage=1 00:18:01.749 --rc genhtml_function_coverage=1 00:18:01.749 --rc genhtml_legend=1 00:18:01.749 --rc geninfo_all_blocks=1 00:18:01.749 --rc geninfo_unexecuted_blocks=1 00:18:01.749 00:18:01.749 ' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
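
As in the ftl.sh run earlier, the first thing the sourced ftl/common.sh does (lines @8 and @9 in the trace around this point) is anchor every path to the script's own location. Written out as a standalone fragment, using the same variable names the trace itself prints:

testdir=$(readlink -f "$(dirname "$0")")     # /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")      # /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py               # every rpc_cmd below funnels through this

This keeps the test location-independent: all binaries, configs and helper scripts are resolved relative to rootdir rather than the caller's working directory.
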
00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:01.749 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:02.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74202 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74202 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 74202 ']' 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.008 03:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.008 [2024-11-05 03:28:25.439655] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
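
The prologue above reduces to three moves: launch spdk_tgt on a three-core mask, record its pid in svcpid, and block until the RPC socket answers. A sketch of that sequence; the poll body below is a simplified stand-in for the real waitforlisten helper in autotest_common.sh, and rpc_get_methods is used only because it is a cheap RPC to probe with:

waitforlisten() {                            # simplified stand-in, not the real helper
    local pid=$1
    while ! "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
        sleep 0.5
    done
}

"$rootdir/build/bin/spdk_tgt" -m 7 &         # -m 7: reactors on cores 0, 1 and 2
svcpid=$!
waitforlisten "$svcpid"

Once this returns, every rpc_cmd in the test body can assume a live target.
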
00:18:02.008 [2024-11-05 03:28:25.440005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74202 ] 00:18:02.266 [2024-11-05 03:28:25.623689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:02.266 [2024-11-05 03:28:25.738702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.266 [2024-11-05 03:28:25.738812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.266 [2024-11-05 03:28:25.738844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:03.203 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:03.461 03:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:03.719 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:03.719 { 00:18:03.719 "name": "nvme0n1", 00:18:03.719 "aliases": [ 00:18:03.719 "1e31f1e9-e099-44d7-8e0c-3555acb2977a" 00:18:03.719 ], 00:18:03.719 "product_name": "NVMe disk", 00:18:03.719 "block_size": 4096, 00:18:03.719 "num_blocks": 1310720, 00:18:03.719 "uuid": "1e31f1e9-e099-44d7-8e0c-3555acb2977a", 00:18:03.719 "numa_id": -1, 00:18:03.720 "assigned_rate_limits": { 00:18:03.720 "rw_ios_per_sec": 0, 00:18:03.720 "rw_mbytes_per_sec": 0, 00:18:03.720 "r_mbytes_per_sec": 0, 00:18:03.720 "w_mbytes_per_sec": 0 00:18:03.720 }, 00:18:03.720 "claimed": false, 00:18:03.720 "zoned": false, 00:18:03.720 "supported_io_types": { 00:18:03.720 "read": true, 00:18:03.720 "write": true, 00:18:03.720 "unmap": true, 00:18:03.720 "flush": true, 00:18:03.720 "reset": true, 00:18:03.720 "nvme_admin": true, 00:18:03.720 "nvme_io": true, 00:18:03.720 "nvme_io_md": false, 00:18:03.720 "write_zeroes": true, 00:18:03.720 "zcopy": false, 00:18:03.720 "get_zone_info": false, 00:18:03.720 "zone_management": false, 00:18:03.720 "zone_append": false, 00:18:03.720 "compare": true, 00:18:03.720 "compare_and_write": false, 00:18:03.720 "abort": true, 00:18:03.720 
"seek_hole": false, 00:18:03.720 "seek_data": false, 00:18:03.720 "copy": true, 00:18:03.720 "nvme_iov_md": false 00:18:03.720 }, 00:18:03.720 "driver_specific": { 00:18:03.720 "nvme": [ 00:18:03.720 { 00:18:03.720 "pci_address": "0000:00:11.0", 00:18:03.720 "trid": { 00:18:03.720 "trtype": "PCIe", 00:18:03.720 "traddr": "0000:00:11.0" 00:18:03.720 }, 00:18:03.720 "ctrlr_data": { 00:18:03.720 "cntlid": 0, 00:18:03.720 "vendor_id": "0x1b36", 00:18:03.720 "model_number": "QEMU NVMe Ctrl", 00:18:03.720 "serial_number": "12341", 00:18:03.720 "firmware_revision": "8.0.0", 00:18:03.720 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:03.720 "oacs": { 00:18:03.720 "security": 0, 00:18:03.720 "format": 1, 00:18:03.720 "firmware": 0, 00:18:03.720 "ns_manage": 1 00:18:03.720 }, 00:18:03.720 "multi_ctrlr": false, 00:18:03.720 "ana_reporting": false 00:18:03.720 }, 00:18:03.720 "vs": { 00:18:03.720 "nvme_version": "1.4" 00:18:03.720 }, 00:18:03.720 "ns_data": { 00:18:03.720 "id": 1, 00:18:03.720 "can_share": false 00:18:03.720 } 00:18:03.720 } 00:18:03.720 ], 00:18:03.720 "mp_policy": "active_passive" 00:18:03.720 } 00:18:03.720 } 00:18:03.720 ]' 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:03.720 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.978 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:03.978 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3388ebc2-2378-47e5-a70b-60b197424d3f 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3388ebc2-2378-47e5-a70b-60b197424d3f 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b065cd99-b782-4660-a462-2fbed21cd0ac 
00:18:04.237 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:04.237 03:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:04.496 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:04.496 { 00:18:04.496 "name": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:04.496 "aliases": [ 00:18:04.496 "lvs/nvme0n1p0" 00:18:04.496 ], 00:18:04.496 "product_name": "Logical Volume", 00:18:04.496 "block_size": 4096, 00:18:04.496 "num_blocks": 26476544, 00:18:04.496 "uuid": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:04.496 "assigned_rate_limits": { 00:18:04.496 "rw_ios_per_sec": 0, 00:18:04.496 "rw_mbytes_per_sec": 0, 00:18:04.496 "r_mbytes_per_sec": 0, 00:18:04.496 "w_mbytes_per_sec": 0 00:18:04.496 }, 00:18:04.496 "claimed": false, 00:18:04.496 "zoned": false, 00:18:04.496 "supported_io_types": { 00:18:04.496 "read": true, 00:18:04.496 "write": true, 00:18:04.496 "unmap": true, 00:18:04.496 "flush": false, 00:18:04.496 "reset": true, 00:18:04.496 "nvme_admin": false, 00:18:04.496 "nvme_io": false, 00:18:04.496 "nvme_io_md": false, 00:18:04.496 "write_zeroes": true, 00:18:04.496 "zcopy": false, 00:18:04.496 "get_zone_info": false, 00:18:04.496 "zone_management": false, 00:18:04.496 "zone_append": false, 00:18:04.496 "compare": false, 00:18:04.496 "compare_and_write": false, 00:18:04.496 "abort": false, 00:18:04.496 "seek_hole": true, 00:18:04.496 "seek_data": true, 00:18:04.496 "copy": false, 00:18:04.496 "nvme_iov_md": false 00:18:04.496 }, 00:18:04.496 "driver_specific": { 00:18:04.496 "lvol": { 00:18:04.496 "lvol_store_uuid": "3388ebc2-2378-47e5-a70b-60b197424d3f", 00:18:04.496 "base_bdev": "nvme0n1", 00:18:04.496 "thin_provision": true, 00:18:04.496 "num_allocated_clusters": 0, 00:18:04.496 "snapshot": false, 00:18:04.496 "clone": false, 00:18:04.496 "esnap_clone": false 00:18:04.496 } 00:18:04.496 } 00:18:04.496 } 00:18:04.496 ]' 00:18:04.496 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:04.496 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:04.496 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:04.754 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.013 03:28:28 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:05.013 { 00:18:05.013 "name": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:05.013 "aliases": [ 00:18:05.013 "lvs/nvme0n1p0" 00:18:05.013 ], 00:18:05.013 "product_name": "Logical Volume", 00:18:05.013 "block_size": 4096, 00:18:05.013 "num_blocks": 26476544, 00:18:05.013 "uuid": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:05.013 "assigned_rate_limits": { 00:18:05.013 "rw_ios_per_sec": 0, 00:18:05.013 "rw_mbytes_per_sec": 0, 00:18:05.013 "r_mbytes_per_sec": 0, 00:18:05.013 "w_mbytes_per_sec": 0 00:18:05.013 }, 00:18:05.013 "claimed": false, 00:18:05.013 "zoned": false, 00:18:05.013 "supported_io_types": { 00:18:05.013 "read": true, 00:18:05.013 "write": true, 00:18:05.013 "unmap": true, 00:18:05.013 "flush": false, 00:18:05.013 "reset": true, 00:18:05.013 "nvme_admin": false, 00:18:05.013 "nvme_io": false, 00:18:05.013 "nvme_io_md": false, 00:18:05.013 "write_zeroes": true, 00:18:05.013 "zcopy": false, 00:18:05.013 "get_zone_info": false, 00:18:05.013 "zone_management": false, 00:18:05.013 "zone_append": false, 00:18:05.013 "compare": false, 00:18:05.013 "compare_and_write": false, 00:18:05.013 "abort": false, 00:18:05.013 "seek_hole": true, 00:18:05.013 "seek_data": true, 00:18:05.013 "copy": false, 00:18:05.013 "nvme_iov_md": false 00:18:05.013 }, 00:18:05.013 "driver_specific": { 00:18:05.013 "lvol": { 00:18:05.013 "lvol_store_uuid": "3388ebc2-2378-47e5-a70b-60b197424d3f", 00:18:05.013 "base_bdev": "nvme0n1", 00:18:05.013 "thin_provision": true, 00:18:05.013 "num_allocated_clusters": 0, 00:18:05.013 "snapshot": false, 00:18:05.013 "clone": false, 00:18:05.013 "esnap_clone": false 00:18:05.013 } 00:18:05.013 } 00:18:05.013 } 00:18:05.013 ]' 00:18:05.013 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:05.272 03:28:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:05.531 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:05.531 03:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b065cd99-b782-4660-a462-2fbed21cd0ac 00:18:05.789 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:05.789 { 00:18:05.789 "name": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:05.789 "aliases": [ 00:18:05.789 "lvs/nvme0n1p0" 00:18:05.789 ], 00:18:05.789 "product_name": "Logical Volume", 00:18:05.789 "block_size": 4096, 00:18:05.789 "num_blocks": 26476544, 00:18:05.789 "uuid": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:05.789 "assigned_rate_limits": { 00:18:05.789 "rw_ios_per_sec": 0, 00:18:05.789 "rw_mbytes_per_sec": 0, 00:18:05.789 "r_mbytes_per_sec": 0, 00:18:05.789 "w_mbytes_per_sec": 0 00:18:05.789 }, 00:18:05.789 "claimed": false, 00:18:05.789 "zoned": false, 00:18:05.789 "supported_io_types": { 00:18:05.789 "read": true, 00:18:05.789 "write": true, 00:18:05.789 "unmap": true, 00:18:05.789 "flush": false, 00:18:05.789 "reset": true, 00:18:05.789 "nvme_admin": false, 00:18:05.789 "nvme_io": false, 00:18:05.789 "nvme_io_md": false, 00:18:05.789 "write_zeroes": true, 00:18:05.789 "zcopy": false, 00:18:05.789 "get_zone_info": false, 00:18:05.789 "zone_management": false, 00:18:05.789 "zone_append": false, 00:18:05.789 "compare": false, 00:18:05.789 "compare_and_write": false, 00:18:05.789 "abort": false, 00:18:05.789 "seek_hole": true, 00:18:05.789 "seek_data": true, 00:18:05.789 "copy": false, 00:18:05.789 "nvme_iov_md": false 00:18:05.789 }, 00:18:05.789 "driver_specific": { 00:18:05.789 "lvol": { 00:18:05.789 "lvol_store_uuid": "3388ebc2-2378-47e5-a70b-60b197424d3f", 00:18:05.789 "base_bdev": "nvme0n1", 00:18:05.789 "thin_provision": true, 00:18:05.789 "num_allocated_clusters": 0, 00:18:05.789 "snapshot": false, 00:18:05.789 "clone": false, 00:18:05.789 "esnap_clone": false 00:18:05.789 } 00:18:05.789 } 00:18:05.789 } 00:18:05.789 ]' 00:18:05.789 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:05.790 03:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b065cd99-b782-4660-a462-2fbed21cd0ac -c nvc0n1p0 --l2p_dram_limit 60 00:18:06.048 [2024-11-05 03:28:29.385328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.385383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.048 [2024-11-05 03:28:29.385403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.048 
[2024-11-05 03:28:29.385414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.385506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.385522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.048 [2024-11-05 03:28:29.385536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:06.048 [2024-11-05 03:28:29.385546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.385602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.048 [2024-11-05 03:28:29.386675] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.048 [2024-11-05 03:28:29.386704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.386723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.048 [2024-11-05 03:28:29.386737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:18:06.048 [2024-11-05 03:28:29.386747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.386813] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 498ad51e-e684-4c84-a95e-b166d6376afc 00:18:06.048 [2024-11-05 03:28:29.388312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.388344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:06.048 [2024-11-05 03:28:29.388357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:06.048 [2024-11-05 03:28:29.388370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.395900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.396060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.048 [2024-11-05 03:28:29.396083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.378 ms 00:18:06.048 [2024-11-05 03:28:29.396096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.396241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.396257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.048 [2024-11-05 03:28:29.396269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:06.048 [2024-11-05 03:28:29.396304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.396394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.396409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.048 [2024-11-05 03:28:29.396421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.048 [2024-11-05 03:28:29.396434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.396495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.048 [2024-11-05 03:28:29.401792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 
03:28:29.401824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.048 [2024-11-05 03:28:29.401839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.309 ms 00:18:06.048 [2024-11-05 03:28:29.401868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.401937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.401948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.048 [2024-11-05 03:28:29.401962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:06.048 [2024-11-05 03:28:29.401973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.402069] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:06.048 [2024-11-05 03:28:29.402215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.048 [2024-11-05 03:28:29.402237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.048 [2024-11-05 03:28:29.402251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:06.048 [2024-11-05 03:28:29.402267] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.048 [2024-11-05 03:28:29.402279] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.048 [2024-11-05 03:28:29.402310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:06.048 [2024-11-05 03:28:29.402321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.048 [2024-11-05 03:28:29.402334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.048 [2024-11-05 03:28:29.402344] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.048 [2024-11-05 03:28:29.402359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.402374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.048 [2024-11-05 03:28:29.402387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:18:06.048 [2024-11-05 03:28:29.402415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.402526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.048 [2024-11-05 03:28:29.402537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.048 [2024-11-05 03:28:29.402550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:06.048 [2024-11-05 03:28:29.402560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.048 [2024-11-05 03:28:29.402716] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.048 [2024-11-05 03:28:29.402728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.048 [2024-11-05 03:28:29.402745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.048 [2024-11-05 03:28:29.402755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.048 [2024-11-05 03:28:29.402768] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:06.049 [2024-11-05 03:28:29.402778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:06.049 [2024-11-05 03:28:29.402804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.049 [2024-11-05 03:28:29.402816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.049 [2024-11-05 03:28:29.402837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.049 [2024-11-05 03:28:29.402847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:06.049 [2024-11-05 03:28:29.402859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.049 [2024-11-05 03:28:29.402868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.049 [2024-11-05 03:28:29.402880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:06.049 [2024-11-05 03:28:29.402890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.049 [2024-11-05 03:28:29.402915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:06.049 [2024-11-05 03:28:29.402927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.049 [2024-11-05 03:28:29.402947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.049 [2024-11-05 03:28:29.402968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.049 [2024-11-05 03:28:29.402978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:06.049 [2024-11-05 03:28:29.402989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.049 [2024-11-05 03:28:29.402998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.049 [2024-11-05 03:28:29.403010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.049 [2024-11-05 03:28:29.403031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.049 [2024-11-05 03:28:29.403040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.049 [2024-11-05 03:28:29.403060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.049 [2024-11-05 03:28:29.403075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.049 [2024-11-05 03:28:29.403096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.049 [2024-11-05 03:28:29.403121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:06.049 [2024-11-05 03:28:29.403133] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.049 [2024-11-05 03:28:29.403143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.049 [2024-11-05 03:28:29.403154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:06.049 [2024-11-05 03:28:29.403165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.049 [2024-11-05 03:28:29.403187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:06.049 [2024-11-05 03:28:29.403199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403208] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.049 [2024-11-05 03:28:29.403220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.049 [2024-11-05 03:28:29.403230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.049 [2024-11-05 03:28:29.403241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.049 [2024-11-05 03:28:29.403254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.049 [2024-11-05 03:28:29.403269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.049 [2024-11-05 03:28:29.403278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.049 [2024-11-05 03:28:29.403300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.049 [2024-11-05 03:28:29.403310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.049 [2024-11-05 03:28:29.403322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.049 [2024-11-05 03:28:29.403336] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.049 [2024-11-05 03:28:29.403351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:06.049 [2024-11-05 03:28:29.403376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:06.049 [2024-11-05 03:28:29.403387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:06.049 [2024-11-05 03:28:29.403399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:06.049 [2024-11-05 03:28:29.403409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:06.049 [2024-11-05 03:28:29.403421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:06.049 [2024-11-05 03:28:29.403431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:06.049 [2024-11-05 03:28:29.403444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:06.049 [2024-11-05 03:28:29.403454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:06.049 [2024-11-05 03:28:29.403470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:06.049 [2024-11-05 03:28:29.403525] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.049 [2024-11-05 03:28:29.403539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.049 [2024-11-05 03:28:29.403567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.049 [2024-11-05 03:28:29.403578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.049 [2024-11-05 03:28:29.403590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.049 [2024-11-05 03:28:29.403601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.049 [2024-11-05 03:28:29.403614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.049 [2024-11-05 03:28:29.403624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:18:06.049 [2024-11-05 03:28:29.403636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.049 [2024-11-05 03:28:29.403785] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
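The two layout dumps above describe the same regions in different units: the human-readable dump reports offsets and sizes in MiB, while the superblock (v5) dump reports raw block offsets and sizes in hex. Assuming the 4 KiB block size this bdev reports further down in the log ("block_size": 4096), the entries line up; for example, the nvc entry "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" is the l2p region: 0x20 = 32 blocks x 4 KiB = 0.12 MiB offset, and 0x5000 = 20480 blocks x 4 KiB = 80.00 MiB, matching "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" at the top of the dump. A quick check with plain bash arithmetic (nothing SPDK-specific):

    $ echo $((0x5000 * 4096 / 1048576))   # region size in MiB
    80
    $ echo $((0x20 * 4096))               # region offset in bytes (0.125 MiB)
    131072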
00:18:06.049 [2024-11-05 03:28:29.403808] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:10.239 [2024-11-05 03:28:33.728426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.239 [2024-11-05 03:28:33.728498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:10.239 [2024-11-05 03:28:33.728520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4331.663 ms 00:18:10.239 [2024-11-05 03:28:33.728533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.239 [2024-11-05 03:28:33.766768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.239 [2024-11-05 03:28:33.767011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.239 [2024-11-05 03:28:33.767038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.985 ms 00:18:10.239 [2024-11-05 03:28:33.767052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.239 [2024-11-05 03:28:33.767221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.239 [2024-11-05 03:28:33.767238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.239 [2024-11-05 03:28:33.767249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:10.239 [2024-11-05 03:28:33.767265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.823502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.823555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.499 [2024-11-05 03:28:33.823581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.206 ms 00:18:10.499 [2024-11-05 03:28:33.823598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.823672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.823690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.499 [2024-11-05 03:28:33.823705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.499 [2024-11-05 03:28:33.823721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.824269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.824323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.499 [2024-11-05 03:28:33.824340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:18:10.499 [2024-11-05 03:28:33.824361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.824533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.824555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.499 [2024-11-05 03:28:33.824569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:10.499 [2024-11-05 03:28:33.824589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.846837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.846879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.499 [2024-11-05 
03:28:33.846894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.231 ms 00:18:10.499 [2024-11-05 03:28:33.846907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.860114] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:10.499 [2024-11-05 03:28:33.876776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.876837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.499 [2024-11-05 03:28:33.876857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.756 ms 00:18:10.499 [2024-11-05 03:28:33.876870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.974784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.974844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:10.499 [2024-11-05 03:28:33.974868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.996 ms 00:18:10.499 [2024-11-05 03:28:33.974879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:33.975111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:33.975125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.499 [2024-11-05 03:28:33.975143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:18:10.499 [2024-11-05 03:28:33.975154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:34.011572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:34.011613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:10.499 [2024-11-05 03:28:34.011630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.396 ms 00:18:10.499 [2024-11-05 03:28:34.011642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:34.047791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:34.047825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:10.499 [2024-11-05 03:28:34.047843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.137 ms 00:18:10.499 [2024-11-05 03:28:34.047853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.499 [2024-11-05 03:28:34.048637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.499 [2024-11-05 03:28:34.048660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.499 [2024-11-05 03:28:34.048674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:18:10.499 [2024-11-05 03:28:34.048684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.152744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.152787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:10.759 [2024-11-05 03:28:34.152810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.145 ms 00:18:10.759 [2024-11-05 03:28:34.152824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 
03:28:34.190930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.191090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:10.759 [2024-11-05 03:28:34.191119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.013 ms 00:18:10.759 [2024-11-05 03:28:34.191131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.227066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.227118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:10.759 [2024-11-05 03:28:34.227137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.926 ms 00:18:10.759 [2024-11-05 03:28:34.227147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.263347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.263489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.759 [2024-11-05 03:28:34.263516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.185 ms 00:18:10.759 [2024-11-05 03:28:34.263527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.263615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.263627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.759 [2024-11-05 03:28:34.263645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.759 [2024-11-05 03:28:34.263658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.263816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.759 [2024-11-05 03:28:34.263833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.759 [2024-11-05 03:28:34.263852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:10.759 [2024-11-05 03:28:34.263863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.759 [2024-11-05 03:28:34.265092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4887.186 ms, result 0 00:18:10.759 { 00:18:10.759 "name": "ftl0", 00:18:10.759 "uuid": "498ad51e-e684-4c84-a95e-b166d6376afc" 00:18:10.759 } 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:10.759 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:11.018 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:11.277 [ 00:18:11.277 { 00:18:11.277 "name": "ftl0", 00:18:11.277 "aliases": [ 00:18:11.277 "498ad51e-e684-4c84-a95e-b166d6376afc" 00:18:11.277 ], 00:18:11.277 "product_name": "FTL 
disk", 00:18:11.277 "block_size": 4096, 00:18:11.277 "num_blocks": 20971520, 00:18:11.277 "uuid": "498ad51e-e684-4c84-a95e-b166d6376afc", 00:18:11.277 "assigned_rate_limits": { 00:18:11.277 "rw_ios_per_sec": 0, 00:18:11.277 "rw_mbytes_per_sec": 0, 00:18:11.277 "r_mbytes_per_sec": 0, 00:18:11.277 "w_mbytes_per_sec": 0 00:18:11.277 }, 00:18:11.277 "claimed": false, 00:18:11.277 "zoned": false, 00:18:11.277 "supported_io_types": { 00:18:11.277 "read": true, 00:18:11.277 "write": true, 00:18:11.277 "unmap": true, 00:18:11.277 "flush": true, 00:18:11.277 "reset": false, 00:18:11.277 "nvme_admin": false, 00:18:11.277 "nvme_io": false, 00:18:11.277 "nvme_io_md": false, 00:18:11.277 "write_zeroes": true, 00:18:11.277 "zcopy": false, 00:18:11.277 "get_zone_info": false, 00:18:11.277 "zone_management": false, 00:18:11.277 "zone_append": false, 00:18:11.277 "compare": false, 00:18:11.277 "compare_and_write": false, 00:18:11.277 "abort": false, 00:18:11.277 "seek_hole": false, 00:18:11.277 "seek_data": false, 00:18:11.277 "copy": false, 00:18:11.277 "nvme_iov_md": false 00:18:11.277 }, 00:18:11.277 "driver_specific": { 00:18:11.277 "ftl": { 00:18:11.277 "base_bdev": "b065cd99-b782-4660-a462-2fbed21cd0ac", 00:18:11.277 "cache": "nvc0n1p0" 00:18:11.277 } 00:18:11.277 } 00:18:11.277 } 00:18:11.277 ] 00:18:11.277 03:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:18:11.277 03:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:11.277 03:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:11.535 03:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:11.536 03:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:11.536 [2024-11-05 03:28:35.090424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.536 [2024-11-05 03:28:35.090476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.536 [2024-11-05 03:28:35.090493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.536 [2024-11-05 03:28:35.090506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.536 [2024-11-05 03:28:35.090574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.536 [2024-11-05 03:28:35.094840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.536 [2024-11-05 03:28:35.094874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.536 [2024-11-05 03:28:35.094890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:18:11.536 [2024-11-05 03:28:35.094901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.536 [2024-11-05 03:28:35.095815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.536 [2024-11-05 03:28:35.095841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.536 [2024-11-05 03:28:35.095856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:18:11.536 [2024-11-05 03:28:35.095866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.536 [2024-11-05 03:28:35.098446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.536 [2024-11-05 03:28:35.098473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.536 
[2024-11-05 03:28:35.098488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:18:11.536 [2024-11-05 03:28:35.098498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.536 [2024-11-05 03:28:35.103599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.536 [2024-11-05 03:28:35.103632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:11.536 [2024-11-05 03:28:35.103648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.060 ms 00:18:11.536 [2024-11-05 03:28:35.103658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.140807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.140843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.796 [2024-11-05 03:28:35.140860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.098 ms 00:18:11.796 [2024-11-05 03:28:35.140870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.163748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.163801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.796 [2024-11-05 03:28:35.163829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.823 ms 00:18:11.796 [2024-11-05 03:28:35.163843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.164174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.164188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.796 [2024-11-05 03:28:35.164202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:18:11.796 [2024-11-05 03:28:35.164212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.200394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.200429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:11.796 [2024-11-05 03:28:35.200446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.189 ms 00:18:11.796 [2024-11-05 03:28:35.200456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.236470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.236504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:11.796 [2024-11-05 03:28:35.236520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.009 ms 00:18:11.796 [2024-11-05 03:28:35.236530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.271677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.271722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.796 [2024-11-05 03:28:35.271739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.139 ms 00:18:11.796 [2024-11-05 03:28:35.271765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.307284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.796 [2024-11-05 03:28:35.307327] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.796 [2024-11-05 03:28:35.307343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.403 ms 00:18:11.796 [2024-11-05 03:28:35.307353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.796 [2024-11-05 03:28:35.307413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.796 [2024-11-05 03:28:35.307431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 
[2024-11-05 03:28:35.307699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:11.796 [2024-11-05 03:28:35.307957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.307968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.307981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.307992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:11.797 [2024-11-05 03:28:35.308007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:11.797 [2024-11-05 03:28:35.308681] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:11.797 [2024-11-05 03:28:35.308693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 498ad51e-e684-4c84-a95e-b166d6376afc 00:18:11.797 [2024-11-05 03:28:35.308705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:11.797 [2024-11-05 03:28:35.308720] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:11.797 [2024-11-05 03:28:35.308729] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.797 [2024-11-05 03:28:35.308745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.797 [2024-11-05 03:28:35.308755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.797 [2024-11-05 03:28:35.308768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.797 [2024-11-05 03:28:35.308778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.797 [2024-11-05 03:28:35.308789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.797 [2024-11-05 03:28:35.308798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.797 [2024-11-05 03:28:35.308811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.797 [2024-11-05 03:28:35.308821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.797 [2024-11-05 03:28:35.308836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:18:11.797 [2024-11-05 03:28:35.308846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.797 [2024-11-05 03:28:35.329453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.797 [2024-11-05 03:28:35.329489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.797 [2024-11-05 03:28:35.329505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.539 ms 00:18:11.797 [2024-11-05 03:28:35.329515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.797 [2024-11-05 03:28:35.330067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.797 [2024-11-05 03:28:35.330082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.797 [2024-11-05 03:28:35.330096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:18:11.797 [2024-11-05 03:28:35.330106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.399616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.399658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.057 [2024-11-05 03:28:35.399675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.399685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
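The statistics dump above also explains the "WAF: inf" line. Assuming the usual definition of write amplification factor, media writes divided by host writes, this startup-and-shutdown cycle wrote 960 blocks of FTL metadata but zero user data, so:

    WAF = total writes / user writes = 960 / 0 -> undefined, printed as "inf"

Once a workload has actually written user data (as the fio runs below will), this ratio becomes a finite measure of how much extra write traffic the FTL adds on top of what the host submitted.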
00:18:12.057 [2024-11-05 03:28:35.399775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.399787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.057 [2024-11-05 03:28:35.399801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.399811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.399937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.399952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.057 [2024-11-05 03:28:35.399968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.399978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.400027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.400038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.057 [2024-11-05 03:28:35.400051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.400061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.531653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.531715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.057 [2024-11-05 03:28:35.531733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.531744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.631232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.631477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.057 [2024-11-05 03:28:35.631510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.631522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.631675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.631689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.057 [2024-11-05 03:28:35.631703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.631718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.631858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.631872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.057 [2024-11-05 03:28:35.631887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.631898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.632063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.632078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.057 [2024-11-05 03:28:35.632093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 
03:28:35.632104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.632181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.632195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.057 [2024-11-05 03:28:35.632208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.632219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.632298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.632311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.057 [2024-11-05 03:28:35.632326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.632336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.632424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.057 [2024-11-05 03:28:35.632436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.057 [2024-11-05 03:28:35.632451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.057 [2024-11-05 03:28:35.632462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.057 [2024-11-05 03:28:35.632701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.141 ms, result 0 00:18:12.057 true 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74202 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 74202 ']' 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 74202 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74202 00:18:12.316 killing process with pid 74202 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74202' 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 74202 00:18:12.316 03:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 74202 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:17.598 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:17.599 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:17.599 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.599 03:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:17.599 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:17.599 fio-3.35 00:18:17.599 Starting 1 thread 00:18:22.874 00:18:22.874 test: (groupid=0, jobs=1): err= 0: pid=74426: Tue Nov 5 03:28:46 2024 00:18:22.874 read: IOPS=904, BW=60.1MiB/s (63.0MB/s)(255MiB/4238msec) 00:18:22.874 slat (nsec): min=4392, max=34651, avg=7332.62, stdev=3296.35 00:18:22.874 clat (usec): min=310, max=742, avg=494.47, stdev=55.34 00:18:22.874 lat (usec): min=315, max=754, avg=501.81, stdev=56.48 00:18:22.874 clat percentiles (usec): 00:18:22.874 | 1.00th=[ 375], 5.00th=[ 396], 10.00th=[ 437], 20.00th=[ 453], 00:18:22.874 | 30.00th=[ 461], 40.00th=[ 474], 50.00th=[ 498], 60.00th=[ 515], 00:18:22.874 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 578], 00:18:22.874 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 701], 99.95th=[ 725], 00:18:22.874 | 99.99th=[ 742] 00:18:22.874 write: IOPS=910, BW=60.5MiB/s (63.4MB/s)(256MiB/4234msec); 0 zone resets 00:18:22.874 slat (usec): min=15, max=434, avg=22.39, stdev= 9.37 00:18:22.874 clat (usec): min=358, max=1059, avg=566.75, stdev=76.19 00:18:22.874 lat (usec): min=375, max=1094, avg=589.14, stdev=78.93 00:18:22.874 clat percentiles (usec): 00:18:22.874 | 1.00th=[ 420], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 515], 00:18:22.874 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 578], 00:18:22.874 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 660], 00:18:22.874 | 99.00th=[ 906], 99.50th=[ 955], 99.90th=[ 1045], 99.95th=[ 1045], 00:18:22.874 | 99.99th=[ 1057] 00:18:22.874 bw ( KiB/s): min=56712, max=66096, per=99.81%, avg=61812.00, stdev=3785.03, samples=8 00:18:22.874 iops : min= 834, max= 972, avg=909.00, stdev=55.66, samples=8 00:18:22.874 lat (usec) : 500=33.83%, 750=65.17%, 1000=0.86% 00:18:22.874 lat (msec) : 
2=0.14% 00:18:22.874 cpu : usr=99.24%, sys=0.14%, ctx=9, majf=0, minf=1169 00:18:22.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.874 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.874 00:18:22.874 Run status group 0 (all jobs): 00:18:22.874 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=255MiB (267MB), run=4238-4238msec 00:18:22.874 WRITE: bw=60.5MiB/s (63.4MB/s), 60.5MiB/s-60.5MiB/s (63.4MB/s-63.4MB/s), io=256MiB (269MB), run=4234-4234msec 00:18:24.779 ----------------------------------------------------- 00:18:24.779 Suppressions used: 00:18:24.779 count bytes template 00:18:24.779 1 5 /usr/src/fio/parse.c 00:18:24.779 1 8 libtcmalloc_minimal.so 00:18:24.779 1 904 libcrypto.so 00:18:24.779 ----------------------------------------------------- 00:18:24.779 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:24.779 03:28:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:25.038 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.038 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.038 fio-3.35 00:18:25.038 Starting 2 threads 00:18:51.583 00:18:51.583 first_half: (groupid=0, jobs=1): err= 0: pid=74539: Tue Nov 5 03:29:14 2024 00:18:51.583 read: IOPS=2655, BW=10.4MiB/s (10.9MB/s)(255MiB/24571msec) 00:18:51.583 slat (nsec): min=3432, max=44179, avg=6360.71, stdev=2838.84 00:18:51.583 clat (usec): min=937, max=272017, avg=36669.74, stdev=18671.22 00:18:51.583 lat (usec): min=941, max=272021, avg=36676.10, stdev=18671.55 00:18:51.583 clat percentiles (msec): 00:18:51.583 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:18:51.583 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:18:51.583 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 53], 00:18:51.583 | 99.00th=[ 140], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 222], 00:18:51.583 | 99.99th=[ 262] 00:18:51.583 write: IOPS=3394, BW=13.3MiB/s (13.9MB/s)(256MiB/19307msec); 0 zone resets 00:18:51.583 slat (usec): min=4, max=666, avg= 8.29, stdev= 7.15 00:18:51.583 clat (usec): min=404, max=97982, avg=11445.67, stdev=20284.83 00:18:51.583 lat (usec): min=432, max=97991, avg=11453.96, stdev=20285.18 00:18:51.583 clat percentiles (usec): 00:18:51.583 | 1.00th=[ 1057], 5.00th=[ 1385], 10.00th=[ 1614], 20.00th=[ 1958], 00:18:51.583 | 30.00th=[ 2835], 40.00th=[ 4817], 50.00th=[ 5669], 60.00th=[ 6521], 00:18:51.583 | 70.00th=[ 7767], 80.00th=[10683], 90.00th=[13435], 95.00th=[77071], 00:18:51.583 | 99.00th=[89654], 99.50th=[91751], 99.90th=[95945], 99.95th=[96994], 00:18:51.583 | 99.99th=[98042] 00:18:51.583 bw ( KiB/s): min= 544, max=41048, per=85.09%, avg=20971.52, stdev=12690.13, samples=25 00:18:51.583 iops : min= 136, max=10262, avg=5242.88, stdev=3172.53, samples=25 00:18:51.583 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.28% 00:18:51.583 lat (msec) : 2=10.29%, 4=7.56%, 10=21.27%, 20=7.36%, 50=46.68% 00:18:51.583 lat (msec) : 100=5.35%, 250=1.15%, 500=0.01% 00:18:51.583 cpu : usr=99.19%, sys=0.21%, ctx=41, majf=0, minf=5593 00:18:51.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:51.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.583 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.583 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.583 second_half: (groupid=0, jobs=1): err= 0: pid=74540: Tue Nov 5 03:29:14 2024 00:18:51.583 read: IOPS=2637, BW=10.3MiB/s (10.8MB/s)(255MiB/24735msec) 00:18:51.583 slat (nsec): min=3418, max=56674, avg=6180.08, stdev=2711.54 00:18:51.583 clat (usec): min=928, max=278204, avg=35918.18, stdev=20161.46 00:18:51.583 lat (usec): min=936, max=278210, avg=35924.36, stdev=20161.75 00:18:51.583 clat percentiles (msec): 00:18:51.583 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 32], 00:18:51.583 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:18:51.583 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 46], 00:18:51.583 | 
99.00th=[ 144], 99.50th=[ 171], 99.90th=[ 203], 99.95th=[ 245], 00:18:51.583 | 99.99th=[ 271] 00:18:51.583 write: IOPS=3080, BW=12.0MiB/s (12.6MB/s)(256MiB/21273msec); 0 zone resets 00:18:51.583 slat (usec): min=4, max=365, avg= 8.59, stdev= 4.98 00:18:51.583 clat (usec): min=380, max=99464, avg=12524.92, stdev=21345.87 00:18:51.583 lat (usec): min=390, max=99471, avg=12533.51, stdev=21346.38 00:18:51.583 clat percentiles (usec): 00:18:51.583 | 1.00th=[ 988], 5.00th=[ 1303], 10.00th=[ 1516], 20.00th=[ 1844], 00:18:51.583 | 30.00th=[ 2311], 40.00th=[ 4293], 50.00th=[ 5735], 60.00th=[ 6718], 00:18:51.583 | 70.00th=[ 8094], 80.00th=[11338], 90.00th=[35914], 95.00th=[79168], 00:18:51.583 | 99.00th=[90702], 99.50th=[92799], 99.90th=[96994], 99.95th=[98042], 00:18:51.583 | 99.99th=[98042] 00:18:51.583 bw ( KiB/s): min= 31, max=53440, per=81.80%, avg=20161.35, stdev=14058.62, samples=26 00:18:51.583 iops : min= 7, max=13360, avg=5040.27, stdev=3514.69, samples=26 00:18:51.583 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.45% 00:18:51.583 lat (msec) : 2=11.80%, 4=7.07%, 10=20.07%, 20=6.72%, 50=47.67% 00:18:51.583 lat (msec) : 100=4.85%, 250=1.27%, 500=0.02% 00:18:51.583 cpu : usr=99.20%, sys=0.22%, ctx=32, majf=0, minf=5524 00:18:51.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:51.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.583 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.583 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.583 00:18:51.583 Run status group 0 (all jobs): 00:18:51.583 READ: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-10.4MiB/s (10.8MB/s-10.9MB/s), io=510MiB (534MB), run=24571-24735msec 00:18:51.583 WRITE: bw=24.1MiB/s (25.2MB/s), 12.0MiB/s-13.3MiB/s (12.6MB/s-13.9MB/s), io=512MiB (537MB), run=19307-21273msec 00:18:53.492 ----------------------------------------------------- 00:18:53.492 Suppressions used: 00:18:53.492 count bytes template 00:18:53.492 2 10 /usr/src/fio/parse.c 00:18:53.492 2 192 /usr/src/fio/iolog.c 00:18:53.492 1 8 libtcmalloc_minimal.so 00:18:53.492 1 904 libcrypto.so 00:18:53.492 ----------------------------------------------------- 00:18:53.492 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.492 03:29:16 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:53.492 03:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:53.492 03:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:53.492 03:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:53.492 03:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:53.492 03:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:53.492 03:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:53.752 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:53.752 fio-3.35 00:18:53.752 Starting 1 thread 00:19:08.666 00:19:08.666 test: (groupid=0, jobs=1): err= 0: pid=74860: Tue Nov 5 03:29:32 2024 00:19:08.666 read: IOPS=7641, BW=29.8MiB/s (31.3MB/s)(255MiB/8533msec) 00:19:08.666 slat (nsec): min=3375, max=53885, avg=5464.61, stdev=2365.63 00:19:08.666 clat (usec): min=615, max=39082, avg=16742.04, stdev=1640.07 00:19:08.666 lat (usec): min=628, max=39091, avg=16747.51, stdev=1640.73 00:19:08.666 clat percentiles (usec): 00:19:08.666 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15664], 20.00th=[15795], 00:19:08.666 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16450], 00:19:08.666 | 70.00th=[16712], 80.00th=[16909], 90.00th=[19792], 95.00th=[20317], 00:19:08.666 | 99.00th=[21103], 99.50th=[21365], 99.90th=[29230], 99.95th=[33817], 00:19:08.666 | 99.99th=[38011] 00:19:08.666 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(256MiB/4915msec); 0 zone resets 00:19:08.666 slat (usec): min=4, max=799, avg= 7.63, stdev= 7.79 00:19:08.666 clat (usec): min=558, max=54777, avg=9551.52, stdev=11574.28 00:19:08.666 lat (usec): min=566, max=54788, avg=9559.15, stdev=11574.27 00:19:08.666 clat percentiles (usec): 00:19:08.666 | 1.00th=[ 930], 5.00th=[ 1123], 10.00th=[ 1270], 20.00th=[ 1467], 00:19:08.666 | 30.00th=[ 1647], 40.00th=[ 1942], 50.00th=[ 6128], 60.00th=[ 7373], 00:19:08.666 | 70.00th=[ 8455], 80.00th=[10814], 90.00th=[34341], 95.00th=[35914], 00:19:08.666 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40633], 99.95th=[44827], 00:19:08.666 | 99.99th=[51119] 00:19:08.666 bw ( KiB/s): min=37816, max=71912, per=98.30%, avg=52428.00, stdev=10072.03, samples=10 00:19:08.666 iops : min= 9454, max=17978, avg=13107.20, stdev=2517.93, samples=10 00:19:08.666 lat (usec) : 750=0.06%, 1000=0.92% 00:19:08.666 lat (msec) : 2=19.38%, 4=0.80%, 10=17.71%, 20=48.66%, 50=12.48% 00:19:08.666 lat (msec) : 100=0.01% 00:19:08.666 cpu : usr=98.66%, sys=0.56%, ctx=25, majf=0, minf=5565 
00:19:08.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:08.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.666 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.666 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.666 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.666 00:19:08.666 Run status group 0 (all jobs): 00:19:08.666 READ: bw=29.8MiB/s (31.3MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=255MiB (267MB), run=8533-8533msec 00:19:08.666 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=256MiB (268MB), run=4915-4915msec 00:19:10.572 ----------------------------------------------------- 00:19:10.572 Suppressions used: 00:19:10.572 count bytes template 00:19:10.572 1 5 /usr/src/fio/parse.c 00:19:10.572 2 192 /usr/src/fio/iolog.c 00:19:10.572 1 8 libtcmalloc_minimal.so 00:19:10.572 1 904 libcrypto.so 00:19:10.572 ----------------------------------------------------- 00:19:10.572 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:10.832 Remove shared memory files 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57806 /dev/shm/spdk_tgt_trace.pid73093 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:10.832 ************************************ 00:19:10.832 END TEST ftl_fio_basic 00:19:10.832 ************************************ 00:19:10.832 00:19:10.832 real 1m9.178s 00:19:10.832 user 2m29.643s 00:19:10.832 sys 0m3.941s 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.832 03:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.832 03:29:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:10.832 03:29:34 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:10.832 03:29:34 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.832 03:29:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:10.832 ************************************ 00:19:10.832 START TEST ftl_bdevperf 00:19:10.832 ************************************ 00:19:10.832 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:11.092 * Looking for test storage... 
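Both fio passes above follow the same pattern: a job file driven through SPDK's external spdk_bdev ioengine, with the ASan runtime preloaded ahead of the plugin. A minimal self-contained sketch — the preload logic mirrors the xtrace above, while the job-file body is only inferred from the "randwrite, bs=4096B, iodepth=128" banner and is not the verbatim repo file:

# Inferred job file; the real one is test/ftl/config/fio/randw-verify-depth128.fio.
cat > /tmp/depth128.fio <<'EOF'
[test]
ioengine=spdk_bdev
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json  ; assumed: the ftl.json cleaned up above
filename=ftl0            ; assumed: the FTL bdev under test
rw=randwrite
bs=4096
iodepth=128
thread=1
verify=crc32c            ; assumed: "verify" in the test name implies a verify pass
EOF
# Resolve the ASan runtime the plugin was linked against, exactly as traced above:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this box
# Preload ASan first, then the SPDK bdev ioengine, and run the job:
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio /tmp/depth128.fio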
00:19:11.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:11.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.092 --rc genhtml_branch_coverage=1 00:19:11.092 --rc genhtml_function_coverage=1 00:19:11.092 --rc genhtml_legend=1 00:19:11.092 --rc geninfo_all_blocks=1 00:19:11.092 --rc geninfo_unexecuted_blocks=1 00:19:11.092 00:19:11.092 ' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:11.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.092 --rc genhtml_branch_coverage=1 00:19:11.092 
--rc genhtml_function_coverage=1 00:19:11.092 --rc genhtml_legend=1 00:19:11.092 --rc geninfo_all_blocks=1 00:19:11.092 --rc geninfo_unexecuted_blocks=1 00:19:11.092 00:19:11.092 ' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:11.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.092 --rc genhtml_branch_coverage=1 00:19:11.092 --rc genhtml_function_coverage=1 00:19:11.092 --rc genhtml_legend=1 00:19:11.092 --rc geninfo_all_blocks=1 00:19:11.092 --rc geninfo_unexecuted_blocks=1 00:19:11.092 00:19:11.092 ' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:11.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.092 --rc genhtml_branch_coverage=1 00:19:11.092 --rc genhtml_function_coverage=1 00:19:11.092 --rc genhtml_legend=1 00:19:11.092 --rc geninfo_all_blocks=1 00:19:11.092 --rc geninfo_unexecuted_blocks=1 00:19:11.092 00:19:11.092 ' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75103 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75103 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 75103 ']' 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.092 03:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:11.092 [2024-11-05 03:29:34.674479] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
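The trace above amounts to: start bdevperf idle, remember its pid, and poll the RPC socket before driving any I/O. In outline — binary path, flags, and socket copied from the trace; the polling loop is a paraphrase of the harness's waitforlisten, not its exact code:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!                  # 75103 in this run
trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# -z: start idle and wait for RPC; -T ftl0: exercise only the bdev named ftl0.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                    # waitforlisten caps this at max_retries=100
done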
00:19:11.352 [2024-11-05 03:29:34.675336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:19:11.352 [2024-11-05 03:29:34.859893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.611 [2024-11-05 03:29:34.974375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:12.180 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:12.439 03:29:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:12.439 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:12.439 { 00:19:12.439 "name": "nvme0n1", 00:19:12.439 "aliases": [ 00:19:12.439 "6dcb6502-6767-4c65-b626-81d7443d43a6" 00:19:12.439 ], 00:19:12.439 "product_name": "NVMe disk", 00:19:12.439 "block_size": 4096, 00:19:12.439 "num_blocks": 1310720, 00:19:12.439 "uuid": "6dcb6502-6767-4c65-b626-81d7443d43a6", 00:19:12.439 "numa_id": -1, 00:19:12.439 "assigned_rate_limits": { 00:19:12.439 "rw_ios_per_sec": 0, 00:19:12.439 "rw_mbytes_per_sec": 0, 00:19:12.439 "r_mbytes_per_sec": 0, 00:19:12.439 "w_mbytes_per_sec": 0 00:19:12.439 }, 00:19:12.439 "claimed": true, 00:19:12.439 "claim_type": "read_many_write_one", 00:19:12.439 "zoned": false, 00:19:12.439 "supported_io_types": { 00:19:12.439 "read": true, 00:19:12.439 "write": true, 00:19:12.439 "unmap": true, 00:19:12.439 "flush": true, 00:19:12.439 "reset": true, 00:19:12.439 "nvme_admin": true, 00:19:12.439 "nvme_io": true, 00:19:12.439 "nvme_io_md": false, 00:19:12.439 "write_zeroes": true, 00:19:12.439 "zcopy": false, 00:19:12.439 "get_zone_info": false, 00:19:12.439 "zone_management": false, 00:19:12.439 "zone_append": false, 00:19:12.439 "compare": true, 00:19:12.439 "compare_and_write": false, 00:19:12.439 "abort": true, 00:19:12.439 "seek_hole": false, 00:19:12.439 "seek_data": false, 00:19:12.439 "copy": true, 00:19:12.439 "nvme_iov_md": false 00:19:12.439 }, 00:19:12.439 "driver_specific": { 00:19:12.439 
"nvme": [ 00:19:12.439 { 00:19:12.439 "pci_address": "0000:00:11.0", 00:19:12.439 "trid": { 00:19:12.439 "trtype": "PCIe", 00:19:12.439 "traddr": "0000:00:11.0" 00:19:12.439 }, 00:19:12.439 "ctrlr_data": { 00:19:12.439 "cntlid": 0, 00:19:12.439 "vendor_id": "0x1b36", 00:19:12.439 "model_number": "QEMU NVMe Ctrl", 00:19:12.439 "serial_number": "12341", 00:19:12.439 "firmware_revision": "8.0.0", 00:19:12.439 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:12.439 "oacs": { 00:19:12.440 "security": 0, 00:19:12.440 "format": 1, 00:19:12.440 "firmware": 0, 00:19:12.440 "ns_manage": 1 00:19:12.440 }, 00:19:12.440 "multi_ctrlr": false, 00:19:12.440 "ana_reporting": false 00:19:12.440 }, 00:19:12.440 "vs": { 00:19:12.440 "nvme_version": "1.4" 00:19:12.440 }, 00:19:12.440 "ns_data": { 00:19:12.440 "id": 1, 00:19:12.440 "can_share": false 00:19:12.440 } 00:19:12.440 } 00:19:12.440 ], 00:19:12.440 "mp_policy": "active_passive" 00:19:12.440 } 00:19:12.440 } 00:19:12.440 ]' 00:19:12.440 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:12.700 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:12.959 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3388ebc2-2378-47e5-a70b-60b197424d3f 00:19:12.959 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:12.959 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3388ebc2-2378-47e5-a70b-60b197424d3f 00:19:12.959 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:13.219 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=76b65df6-c806-4ebb-8bd3-e5e7ad1740de 00:19:13.219 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 76b65df6-c806-4ebb-8bd3-e5e7ad1740de 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.478 03:29:36 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:13.478 03:29:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.737 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:13.737 { 00:19:13.737 "name": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:13.737 "aliases": [ 00:19:13.737 "lvs/nvme0n1p0" 00:19:13.737 ], 00:19:13.738 "product_name": "Logical Volume", 00:19:13.738 "block_size": 4096, 00:19:13.738 "num_blocks": 26476544, 00:19:13.738 "uuid": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:13.738 "assigned_rate_limits": { 00:19:13.738 "rw_ios_per_sec": 0, 00:19:13.738 "rw_mbytes_per_sec": 0, 00:19:13.738 "r_mbytes_per_sec": 0, 00:19:13.738 "w_mbytes_per_sec": 0 00:19:13.738 }, 00:19:13.738 "claimed": false, 00:19:13.738 "zoned": false, 00:19:13.738 "supported_io_types": { 00:19:13.738 "read": true, 00:19:13.738 "write": true, 00:19:13.738 "unmap": true, 00:19:13.738 "flush": false, 00:19:13.738 "reset": true, 00:19:13.738 "nvme_admin": false, 00:19:13.738 "nvme_io": false, 00:19:13.738 "nvme_io_md": false, 00:19:13.738 "write_zeroes": true, 00:19:13.738 "zcopy": false, 00:19:13.738 "get_zone_info": false, 00:19:13.738 "zone_management": false, 00:19:13.738 "zone_append": false, 00:19:13.738 "compare": false, 00:19:13.738 "compare_and_write": false, 00:19:13.738 "abort": false, 00:19:13.738 "seek_hole": true, 00:19:13.738 "seek_data": true, 00:19:13.738 "copy": false, 00:19:13.738 "nvme_iov_md": false 00:19:13.738 }, 00:19:13.738 "driver_specific": { 00:19:13.738 "lvol": { 00:19:13.738 "lvol_store_uuid": "76b65df6-c806-4ebb-8bd3-e5e7ad1740de", 00:19:13.738 "base_bdev": "nvme0n1", 00:19:13.738 "thin_provision": true, 00:19:13.738 "num_allocated_clusters": 0, 00:19:13.738 "snapshot": false, 00:19:13.738 "clone": false, 00:19:13.738 "esnap_clone": false 00:19:13.738 } 00:19:13.738 } 00:19:13.738 } 00:19:13.738 ]' 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:13.738 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:13.997 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:14.256 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:14.256 { 00:19:14.256 "name": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:14.256 "aliases": [ 00:19:14.256 "lvs/nvme0n1p0" 00:19:14.256 ], 00:19:14.256 "product_name": "Logical Volume", 00:19:14.256 "block_size": 4096, 00:19:14.256 "num_blocks": 26476544, 00:19:14.256 "uuid": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:14.256 "assigned_rate_limits": { 00:19:14.256 "rw_ios_per_sec": 0, 00:19:14.256 "rw_mbytes_per_sec": 0, 00:19:14.256 "r_mbytes_per_sec": 0, 00:19:14.256 "w_mbytes_per_sec": 0 00:19:14.256 }, 00:19:14.256 "claimed": false, 00:19:14.256 "zoned": false, 00:19:14.256 "supported_io_types": { 00:19:14.256 "read": true, 00:19:14.256 "write": true, 00:19:14.256 "unmap": true, 00:19:14.256 "flush": false, 00:19:14.256 "reset": true, 00:19:14.256 "nvme_admin": false, 00:19:14.256 "nvme_io": false, 00:19:14.256 "nvme_io_md": false, 00:19:14.256 "write_zeroes": true, 00:19:14.256 "zcopy": false, 00:19:14.256 "get_zone_info": false, 00:19:14.256 "zone_management": false, 00:19:14.256 "zone_append": false, 00:19:14.256 "compare": false, 00:19:14.256 "compare_and_write": false, 00:19:14.256 "abort": false, 00:19:14.256 "seek_hole": true, 00:19:14.256 "seek_data": true, 00:19:14.256 "copy": false, 00:19:14.256 "nvme_iov_md": false 00:19:14.256 }, 00:19:14.256 "driver_specific": { 00:19:14.256 "lvol": { 00:19:14.256 "lvol_store_uuid": "76b65df6-c806-4ebb-8bd3-e5e7ad1740de", 00:19:14.256 "base_bdev": "nvme0n1", 00:19:14.256 "thin_provision": true, 00:19:14.256 "num_allocated_clusters": 0, 00:19:14.256 "snapshot": false, 00:19:14.256 "clone": false, 00:19:14.256 "esnap_clone": false 00:19:14.256 } 00:19:14.256 } 00:19:14.256 } 00:19:14.256 ]' 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:14.257 03:29:37 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:14.516 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37aab4d1-8486-48d9-8a53-219ca3ea4004 00:19:14.775 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:14.775 { 00:19:14.775 "name": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:14.775 "aliases": [ 00:19:14.775 "lvs/nvme0n1p0" 00:19:14.775 ], 00:19:14.775 "product_name": "Logical Volume", 00:19:14.775 "block_size": 4096, 00:19:14.775 "num_blocks": 26476544, 00:19:14.775 "uuid": "37aab4d1-8486-48d9-8a53-219ca3ea4004", 00:19:14.775 "assigned_rate_limits": { 00:19:14.775 "rw_ios_per_sec": 0, 00:19:14.775 "rw_mbytes_per_sec": 0, 00:19:14.775 "r_mbytes_per_sec": 0, 00:19:14.775 "w_mbytes_per_sec": 0 00:19:14.775 }, 00:19:14.775 "claimed": false, 00:19:14.775 "zoned": false, 00:19:14.775 "supported_io_types": { 00:19:14.775 "read": true, 00:19:14.775 "write": true, 00:19:14.776 "unmap": true, 00:19:14.776 "flush": false, 00:19:14.776 "reset": true, 00:19:14.776 "nvme_admin": false, 00:19:14.776 "nvme_io": false, 00:19:14.776 "nvme_io_md": false, 00:19:14.776 "write_zeroes": true, 00:19:14.776 "zcopy": false, 00:19:14.776 "get_zone_info": false, 00:19:14.776 "zone_management": false, 00:19:14.776 "zone_append": false, 00:19:14.776 "compare": false, 00:19:14.776 "compare_and_write": false, 00:19:14.776 "abort": false, 00:19:14.776 "seek_hole": true, 00:19:14.776 "seek_data": true, 00:19:14.776 "copy": false, 00:19:14.776 "nvme_iov_md": false 00:19:14.776 }, 00:19:14.776 "driver_specific": { 00:19:14.776 "lvol": { 00:19:14.776 "lvol_store_uuid": "76b65df6-c806-4ebb-8bd3-e5e7ad1740de", 00:19:14.776 "base_bdev": "nvme0n1", 00:19:14.776 "thin_provision": true, 00:19:14.776 "num_allocated_clusters": 0, 00:19:14.776 "snapshot": false, 00:19:14.776 "clone": false, 00:19:14.776 "esnap_clone": false 00:19:14.776 } 00:19:14.776 } 00:19:14.776 } 00:19:14.776 ]' 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:14.776 03:29:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 37aab4d1-8486-48d9-8a53-219ca3ea4004 -c nvc0n1p0 --l2p_dram_limit 20 00:19:15.036 [2024-11-05 03:29:38.497710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.497804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:15.036 [2024-11-05 03:29:38.497827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:15.036 [2024-11-05 03:29:38.497844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.497933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.497957] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.036 [2024-11-05 03:29:38.497972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:15.036 [2024-11-05 03:29:38.497990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.498015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:15.036 [2024-11-05 03:29:38.499276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:15.036 [2024-11-05 03:29:38.499322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.499344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.036 [2024-11-05 03:29:38.499360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:19:15.036 [2024-11-05 03:29:38.499380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.499556] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d0742f54-d63d-4e19-94c7-c40b919960a9 00:19:15.036 [2024-11-05 03:29:38.502032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.502077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:15.036 [2024-11-05 03:29:38.502096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:15.036 [2024-11-05 03:29:38.502119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.515799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.515844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.036 [2024-11-05 03:29:38.515868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.612 ms 00:19:15.036 [2024-11-05 03:29:38.515885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.516031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.516051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.036 [2024-11-05 03:29:38.516077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:15.036 [2024-11-05 03:29:38.516093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.516176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.516195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:15.036 [2024-11-05 03:29:38.516215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:15.036 [2024-11-05 03:29:38.516231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.516271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:15.036 [2024-11-05 03:29:38.522841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.522889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.036 [2024-11-05 03:29:38.522905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.596 ms 00:19:15.036 [2024-11-05 03:29:38.522923] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.036 [2024-11-05 03:29:38.522967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.036 [2024-11-05 03:29:38.522985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:15.036 [2024-11-05 03:29:38.522999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:15.036 [2024-11-05 03:29:38.523015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.037 [2024-11-05 03:29:38.523058] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:15.037 [2024-11-05 03:29:38.523211] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:15.037 [2024-11-05 03:29:38.523229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:15.037 [2024-11-05 03:29:38.523249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:15.037 [2024-11-05 03:29:38.523265] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:15.037 [2024-11-05 03:29:38.523283] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:15.037 [2024-11-05 03:29:38.523310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:15.037 [2024-11-05 03:29:38.523327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:15.037 [2024-11-05 03:29:38.523339] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:15.037 [2024-11-05 03:29:38.523355] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:15.037 [2024-11-05 03:29:38.523368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.037 [2024-11-05 03:29:38.523390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:15.037 [2024-11-05 03:29:38.523402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:15.037 [2024-11-05 03:29:38.523419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.037 [2024-11-05 03:29:38.523495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.037 [2024-11-05 03:29:38.523515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:15.037 [2024-11-05 03:29:38.523527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:15.037 [2024-11-05 03:29:38.523546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.037 [2024-11-05 03:29:38.523634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:15.037 [2024-11-05 03:29:38.523653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:15.037 [2024-11-05 03:29:38.523671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.037 [2024-11-05 03:29:38.523687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:15.037 [2024-11-05 03:29:38.523716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:15.037 
[2024-11-05 03:29:38.523744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:15.037 [2024-11-05 03:29:38.523756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.037 [2024-11-05 03:29:38.523782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:15.037 [2024-11-05 03:29:38.523814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:15.037 [2024-11-05 03:29:38.523827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.037 [2024-11-05 03:29:38.523866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:15.037 [2024-11-05 03:29:38.523887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:15.037 [2024-11-05 03:29:38.523910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:15.037 [2024-11-05 03:29:38.523937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:15.037 [2024-11-05 03:29:38.523949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:15.037 [2024-11-05 03:29:38.523979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:15.037 [2024-11-05 03:29:38.523994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:15.037 [2024-11-05 03:29:38.524022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:15.037 [2024-11-05 03:29:38.524059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:15.037 [2024-11-05 03:29:38.524101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:15.037 [2024-11-05 03:29:38.524143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.037 [2024-11-05 03:29:38.524169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:15.037 [2024-11-05 03:29:38.524184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:15.037 [2024-11-05 03:29:38.524196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.037 [2024-11-05 03:29:38.524211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:15.037 [2024-11-05 03:29:38.524223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:15.037 [2024-11-05 03:29:38.524237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:15.037 [2024-11-05 03:29:38.524263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:15.037 [2024-11-05 03:29:38.524273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:15.037 [2024-11-05 03:29:38.524315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:15.037 [2024-11-05 03:29:38.524331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.037 [2024-11-05 03:29:38.524368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:15.037 [2024-11-05 03:29:38.524380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:15.037 [2024-11-05 03:29:38.524396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:15.037 [2024-11-05 03:29:38.524408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:15.037 [2024-11-05 03:29:38.524423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:15.037 [2024-11-05 03:29:38.524442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:15.037 [2024-11-05 03:29:38.524473] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:15.037 [2024-11-05 03:29:38.524499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:15.037 [2024-11-05 03:29:38.524550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:15.037 [2024-11-05 03:29:38.524569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:15.037 [2024-11-05 03:29:38.524582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:15.037 [2024-11-05 03:29:38.524598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:15.037 [2024-11-05 03:29:38.524612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:15.037 [2024-11-05 03:29:38.524628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:15.037 [2024-11-05 03:29:38.524640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:15.037 [2024-11-05 03:29:38.524660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:15.037 [2024-11-05 03:29:38.524672] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:15.037 [2024-11-05 03:29:38.524748] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:15.037 [2024-11-05 03:29:38.524762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:15.037 [2024-11-05 03:29:38.524794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:15.037 [2024-11-05 03:29:38.524811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:15.037 [2024-11-05 03:29:38.524824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:15.037 [2024-11-05 03:29:38.524842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.037 [2024-11-05 03:29:38.524858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:15.037 [2024-11-05 03:29:38.524876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:19:15.037 [2024-11-05 03:29:38.524890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.037 [2024-11-05 03:29:38.524946] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:15.038 [2024-11-05 03:29:38.524970] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:19.267 [2024-11-05 03:29:41.992952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:41.993042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:19.268 [2024-11-05 03:29:41.993074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3473.636 ms 00:19:19.268 [2024-11-05 03:29:41.993088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.040137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.040207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.268 [2024-11-05 03:29:42.040231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.808 ms 00:19:19.268 [2024-11-05 03:29:42.040245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.040451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.040470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.268 [2024-11-05 03:29:42.040492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:19.268 [2024-11-05 03:29:42.040505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.102945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.103005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.268 [2024-11-05 03:29:42.103029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.489 ms 00:19:19.268 [2024-11-05 03:29:42.103042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.103092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.103110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.268 [2024-11-05 03:29:42.103128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:19.268 [2024-11-05 03:29:42.103141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.104018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.104038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.268 [2024-11-05 03:29:42.104055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:19:19.268 [2024-11-05 03:29:42.104068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.104196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.104213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.268 [2024-11-05 03:29:42.104235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:19.268 [2024-11-05 03:29:42.104248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.128778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.128824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.268 [2024-11-05 
03:29:42.128846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.543 ms 00:19:19.268 [2024-11-05 03:29:42.128859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.142771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:19.268 [2024-11-05 03:29:42.151944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.151989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.268 [2024-11-05 03:29:42.152006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.017 ms 00:19:19.268 [2024-11-05 03:29:42.152023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.250819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.250901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:19.268 [2024-11-05 03:29:42.250921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.918 ms 00:19:19.268 [2024-11-05 03:29:42.250939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.251157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.251183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.268 [2024-11-05 03:29:42.251198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:19:19.268 [2024-11-05 03:29:42.251215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.289128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.289191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:19.268 [2024-11-05 03:29:42.289210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.903 ms 00:19:19.268 [2024-11-05 03:29:42.289228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.325220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.325273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:19.268 [2024-11-05 03:29:42.325298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.001 ms 00:19:19.268 [2024-11-05 03:29:42.325315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.326113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.326154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.268 [2024-11-05 03:29:42.326168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:19:19.268 [2024-11-05 03:29:42.326185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.432688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.432751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:19.268 [2024-11-05 03:29:42.432769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.613 ms 00:19:19.268 [2024-11-05 03:29:42.432789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 
03:29:42.472368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.472429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:19.268 [2024-11-05 03:29:42.472447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.550 ms 00:19:19.268 [2024-11-05 03:29:42.472469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.509250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.509317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:19.268 [2024-11-05 03:29:42.509333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.794 ms 00:19:19.268 [2024-11-05 03:29:42.509350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.546336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.546387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.268 [2024-11-05 03:29:42.546404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.000 ms 00:19:19.268 [2024-11-05 03:29:42.546420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.546513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.546537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.268 [2024-11-05 03:29:42.546551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:19.268 [2024-11-05 03:29:42.546568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.546695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.268 [2024-11-05 03:29:42.546723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.268 [2024-11-05 03:29:42.546753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:19.268 [2024-11-05 03:29:42.546769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.268 [2024-11-05 03:29:42.548213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4056.541 ms, result 0 00:19:19.268 { 00:19:19.268 "name": "ftl0", 00:19:19.268 "uuid": "d0742f54-d63d-4e19-94c7-c40b919960a9" 00:19:19.268 } 00:19:19.268 03:29:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:19.268 03:29:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:19.268 03:29:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:19.268 03:29:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:19.528 [2024-11-05 03:29:42.888089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:19.528 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:19.528 Zero copy mechanism will not be used. 00:19:19.528 Running I/O for 4 seconds... 
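The three bdevperf passes driven below are all triggered over RPC with the same helper script; the three commands appear verbatim further down in this log. A condensed sketch of the sequence, assuming the same repo layout as this job (BDEVPERF is an illustrative shell variable, not part of the harness):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BDEVPERF perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD1, 68 KiB writes; above the 64 KiB zero-copy threshold, so zero copy is skipped, as noted above
  $BDEVPERF perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD128, 4 KiB steady-state random writes
  $BDEVPERF perform_tests -q 128 -w verify    -t 4 -o 4096    # QD128, 4 KiB write-then-read-back data check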
00:19:21.420 1755.00 IOPS, 116.54 MiB/s [2024-11-05T03:29:45.942Z] 1757.50 IOPS, 116.71 MiB/s [2024-11-05T03:29:47.320Z] 1788.33 IOPS, 118.76 MiB/s [2024-11-05T03:29:47.320Z] 1806.75 IOPS, 119.98 MiB/s 00:19:23.736 Latency(us) 00:19:23.736 [2024-11-05T03:29:47.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.736 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:23.736 ftl0 : 4.00 1806.15 119.94 0.00 0.00 579.63 193.29 2079.25 00:19:23.736 [2024-11-05T03:29:47.320Z] =================================================================================================================== 00:19:23.736 [2024-11-05T03:29:47.320Z] Total : 1806.15 119.94 0.00 0.00 579.63 193.29 2079.25 00:19:23.736 [2024-11-05 03:29:46.894298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:23.736 { 00:19:23.736 "results": [ 00:19:23.736 { 00:19:23.736 "job": "ftl0", 00:19:23.736 "core_mask": "0x1", 00:19:23.736 "workload": "randwrite", 00:19:23.736 "status": "finished", 00:19:23.736 "queue_depth": 1, 00:19:23.736 "io_size": 69632, 00:19:23.736 "runtime": 4.001876, 00:19:23.736 "iops": 1806.1529142832012, 00:19:23.736 "mibps": 119.93984196411883, 00:19:23.736 "io_failed": 0, 00:19:23.736 "io_timeout": 0, 00:19:23.736 "avg_latency_us": 579.6255045639114, 00:19:23.736 "min_latency_us": 193.28514056224898, 00:19:23.736 "max_latency_us": 2079.254618473896 00:19:23.736 } 00:19:23.736 ], 00:19:23.736 "core_count": 1 00:19:23.736 } 00:19:23.736 03:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:23.736 [2024-11-05 03:29:47.040803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:23.736 Running I/O for 4 seconds... 
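Each pass prints a machine-readable results object alongside the human-readable table, as seen above. A short jq sketch for pulling the headline numbers out of one such object, assuming it has been captured to results.json (this job only prints it to the console); the field names match the JSON above:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json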
00:19:25.607 10139.00 IOPS, 39.61 MiB/s [2024-11-05T03:29:50.126Z] 10261.50 IOPS, 40.08 MiB/s [2024-11-05T03:29:51.098Z] 10300.00 IOPS, 40.23 MiB/s [2024-11-05T03:29:51.098Z] 10287.00 IOPS, 40.18 MiB/s 00:19:27.514 Latency(us) 00:19:27.514 [2024-11-05T03:29:51.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.514 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:27.514 ftl0 : 4.02 10277.88 40.15 0.00 0.00 12428.33 253.33 24319.38 00:19:27.514 [2024-11-05T03:29:51.098Z] =================================================================================================================== 00:19:27.514 [2024-11-05T03:29:51.098Z] Total : 10277.88 40.15 0.00 0.00 12428.33 0.00 24319.38 00:19:27.514 { 00:19:27.514 "results": [ 00:19:27.514 { 00:19:27.514 "job": "ftl0", 00:19:27.514 "core_mask": "0x1", 00:19:27.514 "workload": "randwrite", 00:19:27.514 "status": "finished", 00:19:27.514 "queue_depth": 128, 00:19:27.514 "io_size": 4096, 00:19:27.514 "runtime": 4.015711, 00:19:27.514 "iops": 10277.881052695277, 00:19:27.514 "mibps": 40.147972862090924, 00:19:27.514 "io_failed": 0, 00:19:27.514 "io_timeout": 0, 00:19:27.514 "avg_latency_us": 12428.334327808654, 00:19:27.514 "min_latency_us": 253.3269076305221, 00:19:27.514 "max_latency_us": 24319.38313253012 00:19:27.514 } 00:19:27.514 ], 00:19:27.514 "core_count": 1 00:19:27.514 } 00:19:27.514 [2024-11-05 03:29:51.061615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:27.514 03:29:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:27.773 [2024-11-05 03:29:51.187807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:27.773 Running I/O for 4 seconds... 
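The verify pass that follows reads every block back and checks it against what was written, over a fixed range: its table header reports "Verification LBA range: start 0x0 length 0x1400000" and its JSON a verify_range length of 20971520 — the same quantity (20 MiB, if the length is read as bytes), as a quick shell check confirms:

  printf '%d bytes = %d MiB\n' $((0x1400000)) $((0x1400000 / 1048576))
  # -> 20971520 bytes = 20 MiB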
00:19:29.647 7331.00 IOPS, 28.64 MiB/s [2024-11-05T03:29:54.608Z] 7282.00 IOPS, 28.45 MiB/s [2024-11-05T03:29:55.544Z] 7522.33 IOPS, 29.38 MiB/s [2024-11-05T03:29:55.544Z] 7430.75 IOPS, 29.03 MiB/s 00:19:31.960 Latency(us) 00:19:31.960 [2024-11-05T03:29:55.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.960 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:31.960 Verification LBA range: start 0x0 length 0x1400000 00:19:31.960 ftl0 : 4.01 7443.34 29.08 0.00 0.00 17147.09 294.45 22845.48 00:19:31.960 [2024-11-05T03:29:55.544Z] =================================================================================================================== 00:19:31.960 [2024-11-05T03:29:55.544Z] Total : 7443.34 29.08 0.00 0.00 17147.09 0.00 22845.48 00:19:31.960 { 00:19:31.960 "results": [ 00:19:31.960 { 00:19:31.960 "job": "ftl0", 00:19:31.960 "core_mask": "0x1", 00:19:31.960 "workload": "verify", 00:19:31.960 "status": "finished", 00:19:31.960 "verify_range": { 00:19:31.960 "start": 0, 00:19:31.960 "length": 20971520 00:19:31.960 }, 00:19:31.960 "queue_depth": 128, 00:19:31.960 "io_size": 4096, 00:19:31.960 "runtime": 4.010299, 00:19:31.960 "iops": 7443.335272507113, 00:19:31.960 "mibps": 29.07552840823091, 00:19:31.960 "io_failed": 0, 00:19:31.960 "io_timeout": 0, 00:19:31.960 "avg_latency_us": 17147.0891011954, 00:19:31.960 "min_latency_us": 294.45140562249, 00:19:31.960 "max_latency_us": 22845.481124497994 00:19:31.960 } 00:19:31.960 ], 00:19:31.960 "core_count": 1 00:19:31.960 } 00:19:31.960 [2024-11-05 03:29:55.213470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:31.960 03:29:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:31.960 [2024-11-05 03:29:55.424276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.960 [2024-11-05 03:29:55.424340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.960 [2024-11-05 03:29:55.424361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:31.960 [2024-11-05 03:29:55.424375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.960 [2024-11-05 03:29:55.424401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.960 [2024-11-05 03:29:55.428389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.960 [2024-11-05 03:29:55.428419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.960 [2024-11-05 03:29:55.428434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:19:31.960 [2024-11-05 03:29:55.428445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.960 [2024-11-05 03:29:55.430211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.960 [2024-11-05 03:29:55.430250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.960 [2024-11-05 03:29:55.430266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:19:31.960 [2024-11-05 03:29:55.430277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.630669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.630743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:32.219 [2024-11-05 03:29:55.630768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 200.670 ms 00:19:32.219 [2024-11-05 03:29:55.630781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.635883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.635918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:32.219 [2024-11-05 03:29:55.635934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:19:32.219 [2024-11-05 03:29:55.635945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.673530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.673579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:32.219 [2024-11-05 03:29:55.673599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.567 ms 00:19:32.219 [2024-11-05 03:29:55.673609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.696269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.696315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:32.219 [2024-11-05 03:29:55.696352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:19:32.219 [2024-11-05 03:29:55.696362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.696521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.696535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:32.219 [2024-11-05 03:29:55.696552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:32.219 [2024-11-05 03:29:55.696562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.732787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.732823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:32.219 [2024-11-05 03:29:55.732839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.262 ms 00:19:32.219 [2024-11-05 03:29:55.732850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.219 [2024-11-05 03:29:55.768621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.219 [2024-11-05 03:29:55.768660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:32.219 [2024-11-05 03:29:55.768678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.789 ms 00:19:32.219 [2024-11-05 03:29:55.768688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.480 [2024-11-05 03:29:55.804189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.480 [2024-11-05 03:29:55.804227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:32.480 [2024-11-05 03:29:55.804243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.515 ms 00:19:32.480 [2024-11-05 03:29:55.804254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.480 [2024-11-05 03:29:55.840205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.480 [2024-11-05 03:29:55.840241] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:32.480 [2024-11-05 03:29:55.840261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.904 ms 00:19:32.480 [2024-11-05 03:29:55.840271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.480 [2024-11-05 03:29:55.840339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:32.480 [2024-11-05 03:29:55.840357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:32.480 [2024-11-05 03:29:55.840657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.840996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:32.480 [2024-11-05 03:29:55.841298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841593] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:32.481 [2024-11-05 03:29:55.841650] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:32.481 [2024-11-05 03:29:55.841663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0742f54-d63d-4e19-94c7-c40b919960a9 00:19:32.481 [2024-11-05 03:29:55.841675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:32.481 [2024-11-05 03:29:55.841688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:32.481 [2024-11-05 03:29:55.841701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:32.481 [2024-11-05 03:29:55.841714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:32.481 [2024-11-05 03:29:55.841724] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:32.481 [2024-11-05 03:29:55.841737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:32.481 [2024-11-05 03:29:55.841747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:32.481 [2024-11-05 03:29:55.841761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:32.481 [2024-11-05 03:29:55.841770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:32.481 [2024-11-05 03:29:55.841783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.481 [2024-11-05 03:29:55.841793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:32.481 [2024-11-05 03:29:55.841816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:19:32.481 [2024-11-05 03:29:55.841826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.862095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.481 [2024-11-05 03:29:55.862136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:32.481 [2024-11-05 03:29:55.862153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms 00:19:32.481 [2024-11-05 03:29:55.862163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.862759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.481 [2024-11-05 03:29:55.862773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:32.481 [2024-11-05 03:29:55.862786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:19:32.481 [2024-11-05 03:29:55.862797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.917427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.481 [2024-11-05 03:29:55.917475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.481 [2024-11-05 03:29:55.917495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.481 [2024-11-05 03:29:55.917506] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.917572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.481 [2024-11-05 03:29:55.917583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.481 [2024-11-05 03:29:55.917596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.481 [2024-11-05 03:29:55.917606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.917704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.481 [2024-11-05 03:29:55.917720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.481 [2024-11-05 03:29:55.917733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.481 [2024-11-05 03:29:55.917743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:55.917764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.481 [2024-11-05 03:29:55.917774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.481 [2024-11-05 03:29:55.917786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.481 [2024-11-05 03:29:55.917796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.481 [2024-11-05 03:29:56.041809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.481 [2024-11-05 03:29:56.042077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.481 [2024-11-05 03:29:56.042109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.481 [2024-11-05 03:29:56.042121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.140838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.140891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.741 [2024-11-05 03:29:56.140909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.140936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.741 [2024-11-05 03:29:56.141090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.141101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.741 [2024-11-05 03:29:56.141176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.141187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.741 [2024-11-05 03:29:56.141371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:32.741 [2024-11-05 03:29:56.141381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:32.741 [2024-11-05 03:29:56.141448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.141458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.741 [2024-11-05 03:29:56.141524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.141537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.741 [2024-11-05 03:29:56.141607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.741 [2024-11-05 03:29:56.141620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.741 [2024-11-05 03:29:56.141630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.741 [2024-11-05 03:29:56.141763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 718.607 ms, result 0 00:19:32.741 true 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75103 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 75103 ']' 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 75103 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75103 00:19:32.741 killing process with pid 75103 00:19:32.741 Received shutdown signal, test time was about 4.000000 seconds 00:19:32.741 00:19:32.741 Latency(us) 00:19:32.741 [2024-11-05T03:29:56.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.741 [2024-11-05T03:29:56.325Z] =================================================================================================================== 00:19:32.741 [2024-11-05T03:29:56.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75103' 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 75103 00:19:32.741 03:29:56 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 75103 00:19:36.933 Remove shared memory files 00:19:36.933 03:29:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:36.933 03:29:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:36.933 03:29:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:36.933 03:29:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:36.934 ************************************ 00:19:36.934 END TEST ftl_bdevperf 00:19:36.934 ************************************ 00:19:36.934 00:19:36.934 real 0m25.414s 00:19:36.934 user 0m27.861s 00:19:36.934 sys 0m1.347s 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:36.934 03:29:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:36.934 03:29:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:36.934 03:29:59 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:36.934 03:29:59 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:36.934 03:29:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:36.934 ************************************ 00:19:36.934 START TEST ftl_trim 00:19:36.934 ************************************ 00:19:36.934 03:29:59 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:36.934 * Looking for test storage... 00:19:36.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.934 03:29:59 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:36.934 03:29:59 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:19:36.934 03:29:59 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.934 03:30:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:36.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.934 --rc genhtml_branch_coverage=1 00:19:36.934 --rc genhtml_function_coverage=1 00:19:36.934 --rc genhtml_legend=1 00:19:36.934 --rc geninfo_all_blocks=1 00:19:36.934 --rc geninfo_unexecuted_blocks=1 00:19:36.934 00:19:36.934 ' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:36.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.934 --rc genhtml_branch_coverage=1 00:19:36.934 --rc genhtml_function_coverage=1 00:19:36.934 --rc genhtml_legend=1 00:19:36.934 --rc geninfo_all_blocks=1 00:19:36.934 --rc geninfo_unexecuted_blocks=1 00:19:36.934 00:19:36.934 ' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:36.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.934 --rc genhtml_branch_coverage=1 00:19:36.934 --rc genhtml_function_coverage=1 00:19:36.934 --rc genhtml_legend=1 00:19:36.934 --rc geninfo_all_blocks=1 00:19:36.934 --rc geninfo_unexecuted_blocks=1 00:19:36.934 00:19:36.934 ' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:36.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.934 --rc genhtml_branch_coverage=1 00:19:36.934 --rc genhtml_function_coverage=1 00:19:36.934 --rc genhtml_legend=1 00:19:36.934 --rc geninfo_all_blocks=1 00:19:36.934 --rc geninfo_unexecuted_blocks=1 00:19:36.934 00:19:36.934 ' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
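The xtrace above is scripts/common.sh comparing the installed lcov version against 2: each version string is split on ".", "-" and ":" into an array, and fields are compared numerically left to right (1.15 < 2 because 1 < 2 in the first field). A simplified, standalone sketch of that less-than check — an illustrative reimplementation, not the harness's own lt/cmp_versions, and it only handles numeric fields:

  lt() {
      local -a a b
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          # missing trailing fields compare as 0 in this sketch
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo older   # -> older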
00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:36.934 03:30:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75463 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:36.934 03:30:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75463 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75463 ']' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.934 03:30:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:36.934 [2024-11-05 03:30:00.176353] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:19:36.934 [2024-11-05 03:30:00.176475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75463 ] 00:19:36.934 [2024-11-05 03:30:00.360635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.934 [2024-11-05 03:30:00.479487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.934 [2024-11-05 03:30:00.479628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.934 [2024-11-05 03:30:00.479663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.872 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.872 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:37.872 03:30:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:38.131 03:30:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:38.131 03:30:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:38.131 03:30:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:38.131 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:38.131 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:38.131 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:38.131 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:38.131 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:38.390 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:38.390 { 00:19:38.390 "name": "nvme0n1", 00:19:38.390 "aliases": [ 
00:19:38.390 "f620996f-4eb0-4f3c-8bcf-5d2beb29059d" 00:19:38.390 ], 00:19:38.390 "product_name": "NVMe disk", 00:19:38.390 "block_size": 4096, 00:19:38.390 "num_blocks": 1310720, 00:19:38.390 "uuid": "f620996f-4eb0-4f3c-8bcf-5d2beb29059d", 00:19:38.390 "numa_id": -1, 00:19:38.390 "assigned_rate_limits": { 00:19:38.390 "rw_ios_per_sec": 0, 00:19:38.390 "rw_mbytes_per_sec": 0, 00:19:38.390 "r_mbytes_per_sec": 0, 00:19:38.390 "w_mbytes_per_sec": 0 00:19:38.390 }, 00:19:38.390 "claimed": true, 00:19:38.390 "claim_type": "read_many_write_one", 00:19:38.390 "zoned": false, 00:19:38.390 "supported_io_types": { 00:19:38.390 "read": true, 00:19:38.390 "write": true, 00:19:38.390 "unmap": true, 00:19:38.390 "flush": true, 00:19:38.390 "reset": true, 00:19:38.390 "nvme_admin": true, 00:19:38.390 "nvme_io": true, 00:19:38.390 "nvme_io_md": false, 00:19:38.390 "write_zeroes": true, 00:19:38.390 "zcopy": false, 00:19:38.390 "get_zone_info": false, 00:19:38.390 "zone_management": false, 00:19:38.390 "zone_append": false, 00:19:38.390 "compare": true, 00:19:38.390 "compare_and_write": false, 00:19:38.390 "abort": true, 00:19:38.390 "seek_hole": false, 00:19:38.390 "seek_data": false, 00:19:38.390 "copy": true, 00:19:38.390 "nvme_iov_md": false 00:19:38.390 }, 00:19:38.390 "driver_specific": { 00:19:38.390 "nvme": [ 00:19:38.390 { 00:19:38.390 "pci_address": "0000:00:11.0", 00:19:38.390 "trid": { 00:19:38.390 "trtype": "PCIe", 00:19:38.390 "traddr": "0000:00:11.0" 00:19:38.390 }, 00:19:38.390 "ctrlr_data": { 00:19:38.390 "cntlid": 0, 00:19:38.390 "vendor_id": "0x1b36", 00:19:38.390 "model_number": "QEMU NVMe Ctrl", 00:19:38.390 "serial_number": "12341", 00:19:38.390 "firmware_revision": "8.0.0", 00:19:38.390 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:38.390 "oacs": { 00:19:38.390 "security": 0, 00:19:38.390 "format": 1, 00:19:38.390 "firmware": 0, 00:19:38.390 "ns_manage": 1 00:19:38.390 }, 00:19:38.390 "multi_ctrlr": false, 00:19:38.390 "ana_reporting": false 00:19:38.390 }, 00:19:38.390 "vs": { 00:19:38.390 "nvme_version": "1.4" 00:19:38.390 }, 00:19:38.390 "ns_data": { 00:19:38.390 "id": 1, 00:19:38.390 "can_share": false 00:19:38.390 } 00:19:38.390 } 00:19:38.390 ], 00:19:38.390 "mp_policy": "active_passive" 00:19:38.390 } 00:19:38.390 } 00:19:38.390 ]' 00:19:38.390 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:38.390 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:38.390 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:38.650 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:38.650 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:38.650 03:30:01 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:19:38.650 03:30:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:38.650 03:30:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:38.650 03:30:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:38.650 03:30:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:38.650 03:30:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:38.650 03:30:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=76b65df6-c806-4ebb-8bd3-e5e7ad1740de 00:19:38.650 03:30:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:38.650 03:30:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 76b65df6-c806-4ebb-8bd3-e5e7ad1740de 00:19:38.909 03:30:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:39.168 03:30:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8dfb7b49-d984-4ae2-832a-45f8fecd9639 00:19:39.168 03:30:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8dfb7b49-d984-4ae2-832a-45f8fecd9639 00:19:39.427 03:30:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.427 03:30:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.428 03:30:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:39.428 03:30:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:39.428 03:30:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.428 03:30:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:39.428 03:30:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.428 03:30:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.428 03:30:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:39.428 03:30:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:39.428 03:30:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:39.428 03:30:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.686 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:39.686 { 00:19:39.686 "name": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:39.686 "aliases": [ 00:19:39.686 "lvs/nvme0n1p0" 00:19:39.686 ], 00:19:39.686 "product_name": "Logical Volume", 00:19:39.686 "block_size": 4096, 00:19:39.686 "num_blocks": 26476544, 00:19:39.686 "uuid": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:39.686 "assigned_rate_limits": { 00:19:39.686 "rw_ios_per_sec": 0, 00:19:39.686 "rw_mbytes_per_sec": 0, 00:19:39.686 "r_mbytes_per_sec": 0, 00:19:39.686 "w_mbytes_per_sec": 0 00:19:39.686 }, 00:19:39.686 "claimed": false, 00:19:39.686 "zoned": false, 00:19:39.686 "supported_io_types": { 00:19:39.686 "read": true, 00:19:39.686 "write": true, 00:19:39.686 "unmap": true, 00:19:39.686 "flush": false, 00:19:39.686 "reset": true, 00:19:39.686 "nvme_admin": false, 00:19:39.687 "nvme_io": false, 00:19:39.687 "nvme_io_md": false, 00:19:39.687 "write_zeroes": true, 00:19:39.687 "zcopy": false, 00:19:39.687 "get_zone_info": false, 00:19:39.687 "zone_management": false, 00:19:39.687 "zone_append": false, 00:19:39.687 "compare": false, 00:19:39.687 "compare_and_write": false, 00:19:39.687 "abort": false, 00:19:39.687 "seek_hole": true, 00:19:39.687 "seek_data": true, 00:19:39.687 "copy": false, 00:19:39.687 "nvme_iov_md": false 00:19:39.687 }, 00:19:39.687 "driver_specific": { 00:19:39.687 "lvol": { 00:19:39.687 "lvol_store_uuid": "8dfb7b49-d984-4ae2-832a-45f8fecd9639", 00:19:39.687 "base_bdev": "nvme0n1", 00:19:39.687 "thin_provision": true, 00:19:39.687 "num_allocated_clusters": 0, 00:19:39.687 "snapshot": false, 00:19:39.687 "clone": false, 00:19:39.687 "esnap_clone": false 00:19:39.687 } 00:19:39.687 } 00:19:39.687 } 00:19:39.687 ]' 00:19:39.687 03:30:03 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:39.687 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:39.687 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:39.687 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:39.687 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:39.687 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:39.687 03:30:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:39.687 03:30:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:39.687 03:30:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:39.945 03:30:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:39.945 03:30:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:39.945 03:30:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.945 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:39.945 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:39.945 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:39.946 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:39.946 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:40.205 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:40.205 { 00:19:40.205 "name": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:40.205 "aliases": [ 00:19:40.205 "lvs/nvme0n1p0" 00:19:40.205 ], 00:19:40.205 "product_name": "Logical Volume", 00:19:40.205 "block_size": 4096, 00:19:40.205 "num_blocks": 26476544, 00:19:40.205 "uuid": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:40.205 "assigned_rate_limits": { 00:19:40.205 "rw_ios_per_sec": 0, 00:19:40.205 "rw_mbytes_per_sec": 0, 00:19:40.205 "r_mbytes_per_sec": 0, 00:19:40.205 "w_mbytes_per_sec": 0 00:19:40.205 }, 00:19:40.205 "claimed": false, 00:19:40.205 "zoned": false, 00:19:40.205 "supported_io_types": { 00:19:40.205 "read": true, 00:19:40.205 "write": true, 00:19:40.205 "unmap": true, 00:19:40.205 "flush": false, 00:19:40.205 "reset": true, 00:19:40.205 "nvme_admin": false, 00:19:40.205 "nvme_io": false, 00:19:40.205 "nvme_io_md": false, 00:19:40.205 "write_zeroes": true, 00:19:40.205 "zcopy": false, 00:19:40.205 "get_zone_info": false, 00:19:40.205 "zone_management": false, 00:19:40.205 "zone_append": false, 00:19:40.205 "compare": false, 00:19:40.205 "compare_and_write": false, 00:19:40.205 "abort": false, 00:19:40.205 "seek_hole": true, 00:19:40.205 "seek_data": true, 00:19:40.205 "copy": false, 00:19:40.205 "nvme_iov_md": false 00:19:40.205 }, 00:19:40.205 "driver_specific": { 00:19:40.205 "lvol": { 00:19:40.205 "lvol_store_uuid": "8dfb7b49-d984-4ae2-832a-45f8fecd9639", 00:19:40.205 "base_bdev": "nvme0n1", 00:19:40.205 "thin_provision": true, 00:19:40.205 "num_allocated_clusters": 0, 00:19:40.205 "snapshot": false, 00:19:40.205 "clone": false, 00:19:40.205 "esnap_clone": false 00:19:40.205 } 00:19:40.205 } 00:19:40.205 } 00:19:40.205 ]' 00:19:40.205 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:40.205 03:30:03 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:19:40.205 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:40.464 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:40.464 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:40.464 03:30:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:40.464 03:30:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:40.464 03:30:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:40.464 03:30:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:40.464 03:30:04 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:40.464 03:30:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:40.464 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:40.464 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:40.464 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:40.464 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:40.464 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 00:19:40.722 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:40.722 { 00:19:40.722 "name": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:40.722 "aliases": [ 00:19:40.722 "lvs/nvme0n1p0" 00:19:40.722 ], 00:19:40.722 "product_name": "Logical Volume", 00:19:40.722 "block_size": 4096, 00:19:40.722 "num_blocks": 26476544, 00:19:40.722 "uuid": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:40.722 "assigned_rate_limits": { 00:19:40.722 "rw_ios_per_sec": 0, 00:19:40.722 "rw_mbytes_per_sec": 0, 00:19:40.722 "r_mbytes_per_sec": 0, 00:19:40.722 "w_mbytes_per_sec": 0 00:19:40.722 }, 00:19:40.722 "claimed": false, 00:19:40.722 "zoned": false, 00:19:40.722 "supported_io_types": { 00:19:40.722 "read": true, 00:19:40.722 "write": true, 00:19:40.722 "unmap": true, 00:19:40.722 "flush": false, 00:19:40.722 "reset": true, 00:19:40.722 "nvme_admin": false, 00:19:40.722 "nvme_io": false, 00:19:40.722 "nvme_io_md": false, 00:19:40.722 "write_zeroes": true, 00:19:40.722 "zcopy": false, 00:19:40.722 "get_zone_info": false, 00:19:40.722 "zone_management": false, 00:19:40.722 "zone_append": false, 00:19:40.722 "compare": false, 00:19:40.722 "compare_and_write": false, 00:19:40.722 "abort": false, 00:19:40.722 "seek_hole": true, 00:19:40.722 "seek_data": true, 00:19:40.722 "copy": false, 00:19:40.722 "nvme_iov_md": false 00:19:40.722 }, 00:19:40.722 "driver_specific": { 00:19:40.722 "lvol": { 00:19:40.722 "lvol_store_uuid": "8dfb7b49-d984-4ae2-832a-45f8fecd9639", 00:19:40.722 "base_bdev": "nvme0n1", 00:19:40.722 "thin_provision": true, 00:19:40.722 "num_allocated_clusters": 0, 00:19:40.723 "snapshot": false, 00:19:40.723 "clone": false, 00:19:40.723 "esnap_clone": false 00:19:40.723 } 00:19:40.723 } 00:19:40.723 } 00:19:40.723 ]' 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:40.723 03:30:04 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:40.723 03:30:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:40.723 03:30:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41b8a97b-ccad-4aab-8993-4de5b4eb7a56 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:40.981 [2024-11-05 03:30:04.464380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.464430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.981 [2024-11-05 03:30:04.464450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.981 [2024-11-05 03:30:04.464461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.467748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.467894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.981 [2024-11-05 03:30:04.467920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:19:40.981 [2024-11-05 03:30:04.467931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.468064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.981 [2024-11-05 03:30:04.469027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.981 [2024-11-05 03:30:04.469056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.469068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.981 [2024-11-05 03:30:04.469081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:19:40.981 [2024-11-05 03:30:04.469092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.469200] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 24647937-08ab-4dc6-a95c-fd93b438c7ce 00:19:40.981 [2024-11-05 03:30:04.470607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.470640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:40.981 [2024-11-05 03:30:04.470652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:40.981 [2024-11-05 03:30:04.470665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.478186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.478329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.981 [2024-11-05 03:30:04.478448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:19:40.981 [2024-11-05 03:30:04.478494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.478695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.478912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.981 [2024-11-05 03:30:04.478951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.085 ms 00:19:40.981 [2024-11-05 03:30:04.478993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.479060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.479103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.981 [2024-11-05 03:30:04.479139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.981 [2024-11-05 03:30:04.479174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.479352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:40.981 [2024-11-05 03:30:04.484495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.484522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.981 [2024-11-05 03:30:04.484542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 00:19:40.981 [2024-11-05 03:30:04.484553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.484616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.981 [2024-11-05 03:30:04.484628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.981 [2024-11-05 03:30:04.484641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:40.981 [2024-11-05 03:30:04.484666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.981 [2024-11-05 03:30:04.484699] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:40.981 [2024-11-05 03:30:04.484833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:40.981 [2024-11-05 03:30:04.484857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.981 [2024-11-05 03:30:04.484871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:40.982 [2024-11-05 03:30:04.484886] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.982 [2024-11-05 03:30:04.484899] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.982 [2024-11-05 03:30:04.484913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:40.982 [2024-11-05 03:30:04.484924] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.982 [2024-11-05 03:30:04.484936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:40.982 [2024-11-05 03:30:04.484949] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:40.982 [2024-11-05 03:30:04.484961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.982 [2024-11-05 03:30:04.484972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.982 [2024-11-05 03:30:04.484986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:40.982 [2024-11-05 03:30:04.484997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.982 [2024-11-05 03:30:04.485082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.982 
[2024-11-05 03:30:04.485093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.982 [2024-11-05 03:30:04.485106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:40.982 [2024-11-05 03:30:04.485116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.982 [2024-11-05 03:30:04.485240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.982 [2024-11-05 03:30:04.485252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.982 [2024-11-05 03:30:04.485266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.982 [2024-11-05 03:30:04.485310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.982 [2024-11-05 03:30:04.485344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.982 [2024-11-05 03:30:04.485366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.982 [2024-11-05 03:30:04.485376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:40.982 [2024-11-05 03:30:04.485388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.982 [2024-11-05 03:30:04.485398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.982 [2024-11-05 03:30:04.485429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:40.982 [2024-11-05 03:30:04.485438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.982 [2024-11-05 03:30:04.485462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.982 [2024-11-05 03:30:04.485496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.982 [2024-11-05 03:30:04.485526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.982 [2024-11-05 03:30:04.485559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:40.982 [2024-11-05 03:30:04.485594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.982 [2024-11-05 03:30:04.485629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.982 [2024-11-05 03:30:04.485649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.982 [2024-11-05 03:30:04.485659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:40.982 [2024-11-05 03:30:04.485670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.982 [2024-11-05 03:30:04.485679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:40.982 [2024-11-05 03:30:04.485691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:40.982 [2024-11-05 03:30:04.485700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:40.982 [2024-11-05 03:30:04.485720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:40.982 [2024-11-05 03:30:04.485731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.982 [2024-11-05 03:30:04.485753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.982 [2024-11-05 03:30:04.485763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.982 [2024-11-05 03:30:04.485787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.982 [2024-11-05 03:30:04.485801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.982 [2024-11-05 03:30:04.485810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.982 [2024-11-05 03:30:04.485822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.982 [2024-11-05 03:30:04.485831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.982 [2024-11-05 03:30:04.485844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.982 [2024-11-05 03:30:04.485858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.982 [2024-11-05 03:30:04.485873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.485885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:40.982 [2024-11-05 03:30:04.485898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:40.982 [2024-11-05 03:30:04.485908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:40.982 [2024-11-05 03:30:04.485921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:40.982 [2024-11-05 03:30:04.485932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:40.982 [2024-11-05 03:30:04.485944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:40.982 [2024-11-05 03:30:04.485955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:40.982 [2024-11-05 03:30:04.485967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:40.982 [2024-11-05 03:30:04.485978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:40.982 [2024-11-05 03:30:04.485993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:40.982 [2024-11-05 03:30:04.486049] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.982 [2024-11-05 03:30:04.486071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.982 [2024-11-05 03:30:04.486095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.982 [2024-11-05 03:30:04.486106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.982 [2024-11-05 03:30:04.486118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.982 [2024-11-05 03:30:04.486129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.982 [2024-11-05 03:30:04.486142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.982 [2024-11-05 03:30:04.486153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:19:40.982 [2024-11-05 03:30:04.486165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.982 [2024-11-05 03:30:04.486242] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:40.982 [2024-11-05 03:30:04.486260] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:47.567 [2024-11-05 03:30:10.639611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.639903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:47.567 [2024-11-05 03:30:10.639930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6163.362 ms 00:19:47.567 [2024-11-05 03:30:10.639945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.683046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.683103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.567 [2024-11-05 03:30:10.683121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.773 ms 00:19:47.567 [2024-11-05 03:30:10.683136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.683311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.683340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:47.567 [2024-11-05 03:30:10.683353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:47.567 [2024-11-05 03:30:10.683369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.748156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.748221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.567 [2024-11-05 03:30:10.748242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.835 ms 00:19:47.567 [2024-11-05 03:30:10.748272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.748428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.748448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.567 [2024-11-05 03:30:10.748462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.567 [2024-11-05 03:30:10.748478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.748943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.748971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.567 [2024-11-05 03:30:10.748984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:19:47.567 [2024-11-05 03:30:10.749000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.749142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.749172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.567 [2024-11-05 03:30:10.749186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:47.567 [2024-11-05 03:30:10.749204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.774286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.774530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:47.567 [2024-11-05 03:30:10.774648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.066 ms 00:19:47.567 [2024-11-05 03:30:10.774714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.789126] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:47.567 [2024-11-05 03:30:10.807001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.807263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:47.567 [2024-11-05 03:30:10.807380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.075 ms 00:19:47.567 [2024-11-05 03:30:10.807422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.985405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.985641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:47.567 [2024-11-05 03:30:10.985755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.066 ms 00:19:47.567 [2024-11-05 03:30:10.985795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:10.986067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:10.986133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:47.567 [2024-11-05 03:30:10.986239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:47.567 [2024-11-05 03:30:10.986307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:11.026833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:11.027018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:47.567 [2024-11-05 03:30:11.027114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.519 ms 00:19:47.567 [2024-11-05 03:30:11.027153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:11.068681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:11.068841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:47.567 [2024-11-05 03:30:11.068942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.456 ms 00:19:47.567 [2024-11-05 03:30:11.068975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.567 [2024-11-05 03:30:11.069872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.567 [2024-11-05 03:30:11.069998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:47.567 [2024-11-05 03:30:11.070090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:19:47.567 [2024-11-05 03:30:11.070134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.210152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.210393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:47.826 [2024-11-05 03:30:11.210585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.167 ms 00:19:47.826 [2024-11-05 03:30:11.210628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
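(For reference, the trace above is the device-bringup half of this trim test: derive the base bdev size, rebuild the lvstore, carve an NV-cache partition, then start FTL. A minimal sketch of that RPC sequence, assuming a running SPDK target and the same names the trace uses, nvme0n1 on 0000:00:11.0 and nvc0 on 0000:00:10.0:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# get_bdev_size in MiB = num_blocks * block_size / 1048576;
# for the QEMU namespace above: 1310720 * 4096 / 1048576 = 5120 MiB,
# and for the lvol: 26476544 * 4096 / 1048576 = 103424 MiB.
bs=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
nb=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
echo "base size: $((nb * bs / 1048576)) MiB"

# clear_lvols: drop any stale lvstore left by a previous run.
for u in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
  $RPC bdev_lvol_delete_lvstore -u "$u"
done

# Fresh lvstore plus a thin-provisioned (-t) 103424 MiB volume on it.
lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# Attach the cache controller and split off a 5171 MiB write-buffer partition.
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1      # yields nvc0n1p0

# Create the FTL bdev; this kicks off the 'FTL startup' steps logged above.
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
)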
00:19:47.826 [2024-11-05 03:30:11.254422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.254643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:47.826 [2024-11-05 03:30:11.254735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.678 ms 00:19:47.826 [2024-11-05 03:30:11.254775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.295974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.296158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:47.826 [2024-11-05 03:30:11.296243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.133 ms 00:19:47.826 [2024-11-05 03:30:11.296282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.337670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.337838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:47.826 [2024-11-05 03:30:11.337917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.335 ms 00:19:47.826 [2024-11-05 03:30:11.337973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.338100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.338123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:47.826 [2024-11-05 03:30:11.338143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:47.826 [2024-11-05 03:30:11.338153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.338258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.826 [2024-11-05 03:30:11.338275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:47.826 [2024-11-05 03:30:11.338314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:47.826 [2024-11-05 03:30:11.338326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.826 [2024-11-05 03:30:11.339341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.826 [2024-11-05 03:30:11.344208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6885.781 ms, result 0 00:19:47.826 [2024-11-05 03:30:11.345218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.826 { 00:19:47.826 "name": "ftl0", 00:19:47.826 "uuid": "24647937-08ab-4dc6-a95c-fd93b438c7ce" 00:19:47.826 } 00:19:47.826 03:30:11 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:47.826 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:48.084 03:30:11 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:48.343 [ 00:19:48.343 { 00:19:48.343 "name": "ftl0", 00:19:48.343 "aliases": [ 00:19:48.343 "24647937-08ab-4dc6-a95c-fd93b438c7ce" 00:19:48.343 ], 00:19:48.343 "product_name": "FTL disk", 00:19:48.343 "block_size": 4096, 00:19:48.343 "num_blocks": 23592960, 00:19:48.343 "uuid": "24647937-08ab-4dc6-a95c-fd93b438c7ce", 00:19:48.343 "assigned_rate_limits": { 00:19:48.343 "rw_ios_per_sec": 0, 00:19:48.343 "rw_mbytes_per_sec": 0, 00:19:48.343 "r_mbytes_per_sec": 0, 00:19:48.343 "w_mbytes_per_sec": 0 00:19:48.343 }, 00:19:48.343 "claimed": false, 00:19:48.343 "zoned": false, 00:19:48.343 "supported_io_types": { 00:19:48.343 "read": true, 00:19:48.343 "write": true, 00:19:48.343 "unmap": true, 00:19:48.343 "flush": true, 00:19:48.343 "reset": false, 00:19:48.343 "nvme_admin": false, 00:19:48.343 "nvme_io": false, 00:19:48.343 "nvme_io_md": false, 00:19:48.343 "write_zeroes": true, 00:19:48.343 "zcopy": false, 00:19:48.343 "get_zone_info": false, 00:19:48.343 "zone_management": false, 00:19:48.343 "zone_append": false, 00:19:48.343 "compare": false, 00:19:48.343 "compare_and_write": false, 00:19:48.343 "abort": false, 00:19:48.343 "seek_hole": false, 00:19:48.343 "seek_data": false, 00:19:48.343 "copy": false, 00:19:48.343 "nvme_iov_md": false 00:19:48.343 }, 00:19:48.343 "driver_specific": { 00:19:48.343 "ftl": { 00:19:48.343 "base_bdev": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 00:19:48.343 "cache": "nvc0n1p0" 00:19:48.343 } 00:19:48.343 } 00:19:48.343 } 00:19:48.343 ] 00:19:48.343 03:30:11 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:19:48.343 03:30:11 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:48.343 03:30:11 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:48.602 03:30:12 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:48.602 03:30:12 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:48.861 03:30:12 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:48.861 { 00:19:48.861 "name": "ftl0", 00:19:48.861 "aliases": [ 00:19:48.861 "24647937-08ab-4dc6-a95c-fd93b438c7ce" 00:19:48.861 ], 00:19:48.861 "product_name": "FTL disk", 00:19:48.861 "block_size": 4096, 00:19:48.861 "num_blocks": 23592960, 00:19:48.861 "uuid": "24647937-08ab-4dc6-a95c-fd93b438c7ce", 00:19:48.861 "assigned_rate_limits": { 00:19:48.861 "rw_ios_per_sec": 0, 00:19:48.861 "rw_mbytes_per_sec": 0, 00:19:48.861 "r_mbytes_per_sec": 0, 00:19:48.861 "w_mbytes_per_sec": 0 00:19:48.861 }, 00:19:48.861 "claimed": false, 00:19:48.861 "zoned": false, 00:19:48.861 "supported_io_types": { 00:19:48.861 "read": true, 00:19:48.861 "write": true, 00:19:48.861 "unmap": true, 00:19:48.861 "flush": true, 00:19:48.861 "reset": false, 00:19:48.861 "nvme_admin": false, 00:19:48.861 "nvme_io": false, 00:19:48.861 "nvme_io_md": false, 00:19:48.861 "write_zeroes": true, 00:19:48.861 "zcopy": false, 00:19:48.861 "get_zone_info": false, 00:19:48.861 "zone_management": false, 00:19:48.861 "zone_append": false, 00:19:48.861 "compare": false, 00:19:48.861 "compare_and_write": false, 00:19:48.861 "abort": false, 00:19:48.861 "seek_hole": false, 00:19:48.861 "seek_data": false, 00:19:48.861 "copy": false, 00:19:48.861 "nvme_iov_md": false 00:19:48.861 }, 00:19:48.861 "driver_specific": { 00:19:48.861 "ftl": { 00:19:48.861 "base_bdev": "41b8a97b-ccad-4aab-8993-4de5b4eb7a56", 
00:19:48.861 "cache": "nvc0n1p0" 00:19:48.861 } 00:19:48.861 } 00:19:48.861 } 00:19:48.861 ]' 00:19:48.861 03:30:12 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:48.861 03:30:12 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:48.861 03:30:12 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.120 [2024-11-05 03:30:12.550641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.550892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.120 [2024-11-05 03:30:12.550923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.120 [2024-11-05 03:30:12.550940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.550999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:49.120 [2024-11-05 03:30:12.555528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.555557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.120 [2024-11-05 03:30:12.555578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.513 ms 00:19:49.120 [2024-11-05 03:30:12.555590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.556156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.556174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.120 [2024-11-05 03:30:12.556189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:19:49.120 [2024-11-05 03:30:12.556199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.559493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.559621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.120 [2024-11-05 03:30:12.559721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:19:49.120 [2024-11-05 03:30:12.559777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.566154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.566303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.120 [2024-11-05 03:30:12.566358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.278 ms 00:19:49.120 [2024-11-05 03:30:12.566391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.607233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.607399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.120 [2024-11-05 03:30:12.607509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.762 ms 00:19:49.120 [2024-11-05 03:30:12.607554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.634710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.634923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.120 [2024-11-05 03:30:12.635030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 27.064 ms 00:19:49.120 [2024-11-05 03:30:12.635078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.635424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.635548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.120 [2024-11-05 03:30:12.635634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:49.120 [2024-11-05 03:30:12.635677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.120 [2024-11-05 03:30:12.675695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.120 [2024-11-05 03:30:12.675848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:49.120 [2024-11-05 03:30:12.675929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.018 ms 00:19:49.120 [2024-11-05 03:30:12.675966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.381 [2024-11-05 03:30:12.714753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.381 [2024-11-05 03:30:12.714796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:49.381 [2024-11-05 03:30:12.714819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.687 ms 00:19:49.381 [2024-11-05 03:30:12.714830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.381 [2024-11-05 03:30:12.752871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.381 [2024-11-05 03:30:12.752911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.381 [2024-11-05 03:30:12.752928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.997 ms 00:19:49.381 [2024-11-05 03:30:12.752939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.381 [2024-11-05 03:30:12.790050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.381 [2024-11-05 03:30:12.790087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.381 [2024-11-05 03:30:12.790103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.036 ms 00:19:49.381 [2024-11-05 03:30:12.790113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.381 [2024-11-05 03:30:12.790199] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.381 [2024-11-05 03:30:12.790218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790333] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 
[2024-11-05 03:30:12.790687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.381 [2024-11-05 03:30:12.790979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.790990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:49.382 [2024-11-05 03:30:12.791058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.382 [2024-11-05 03:30:12.791631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.382 [2024-11-05 03:30:12.791647] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce 00:19:49.382 [2024-11-05 03:30:12.791659] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.382 [2024-11-05 03:30:12.791671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.382 [2024-11-05 03:30:12.791682] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.382 [2024-11-05 03:30:12.791696] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.382 [2024-11-05 03:30:12.791709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.382 [2024-11-05 03:30:12.791723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:49.382 [2024-11-05 03:30:12.791733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.382 [2024-11-05 03:30:12.791746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.382 [2024-11-05 03:30:12.791756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.382 [2024-11-05 03:30:12.791770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.382 [2024-11-05 03:30:12.791780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.382 [2024-11-05 03:30:12.791812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:19:49.382 [2024-11-05 03:30:12.791822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.812845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.382 [2024-11-05 03:30:12.813000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.382 [2024-11-05 03:30:12.813033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:19:49.382 [2024-11-05 03:30:12.813045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.813644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.382 [2024-11-05 03:30:12.813659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.382 [2024-11-05 03:30:12.813673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:19:49.382 [2024-11-05 03:30:12.813683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.884620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.382 [2024-11-05 03:30:12.884670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.382 [2024-11-05 03:30:12.884687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.382 [2024-11-05 03:30:12.884697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.884844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.382 [2024-11-05 03:30:12.884857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.382 [2024-11-05 03:30:12.884870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.382 [2024-11-05 03:30:12.884880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.884955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.382 [2024-11-05 03:30:12.884969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.382 [2024-11-05 03:30:12.884988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.382 [2024-11-05 03:30:12.884998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.382 [2024-11-05 03:30:12.885034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.382 [2024-11-05 03:30:12.885045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.382 [2024-11-05 03:30:12.885058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.382 [2024-11-05 03:30:12.885068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.018464] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.018681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.642 [2024-11-05 03:30:13.018718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.018729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.122102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.122278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.642 [2024-11-05 03:30:13.122317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.122328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.122490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.122503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.642 [2024-11-05 03:30:13.122539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.122552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.122656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.122667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.642 [2024-11-05 03:30:13.122680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.122690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.122844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.122858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.642 [2024-11-05 03:30:13.122872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.122882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.122967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.122980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:49.642 [2024-11-05 03:30:13.122993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.123004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.123109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.123122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.642 [2024-11-05 03:30:13.123138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.123149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.642 [2024-11-05 03:30:13.123236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.642 [2024-11-05 03:30:13.123249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.642 [2024-11-05 03:30:13.123263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.642 [2024-11-05 03:30:13.123274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:49.642 [2024-11-05 03:30:13.123582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 573.835 ms, result 0 00:19:49.642 true 00:19:49.642 03:30:13 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75463 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75463 ']' 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75463 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75463 00:19:49.642 killing process with pid 75463 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75463' 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75463 00:19:49.642 03:30:13 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75463 00:19:54.940 03:30:18 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:55.873 65536+0 records in 00:19:55.873 65536+0 records out 00:19:55.873 268435456 bytes (268 MB, 256 MiB) copied, 1.01985 s, 263 MB/s 00:19:55.873 03:30:19 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.873 [2024-11-05 03:30:19.378801] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
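
The trim test first generates a 256 MiB random pattern with dd and then replays it onto the ftl0 bdev through spdk_dd. As a quick sanity check of the 263 MB/s dd reports, a minimal sketch using only the byte count and elapsed time printed above (nothing here is SPDK code):

#include <stdio.h>

int main(void)
{
    /* 65536 blocks x 4 KiB = 268435456 B = 256 MiB (268 MB decimal) */
    const double bytes   = 65536.0 * 4096.0;
    const double seconds = 1.01985;  /* elapsed time reported by dd */
    /* dd reports decimal MB/s: 268435456 / 1.01985 / 1e6 ~= 263 MB/s */
    printf("%.0f bytes -> %.0f MB/s\n", bytes, bytes / seconds / 1e6);
    return 0;
}
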
00:19:55.873 [2024-11-05 03:30:19.378922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75707 ] 00:19:56.131 [2024-11-05 03:30:19.561954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.131 [2024-11-05 03:30:19.683871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.698 [2024-11-05 03:30:20.057581] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.698 [2024-11-05 03:30:20.057654] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.698 [2024-11-05 03:30:20.221791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.221848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.698 [2024-11-05 03:30:20.221864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.698 [2024-11-05 03:30:20.221874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.225234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.225425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.698 [2024-11-05 03:30:20.225453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:19:56.698 [2024-11-05 03:30:20.225467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.225663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.698 [2024-11-05 03:30:20.226790] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.698 [2024-11-05 03:30:20.226831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.226846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.698 [2024-11-05 03:30:20.226860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:19:56.698 [2024-11-05 03:30:20.226874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.228639] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:56.698 [2024-11-05 03:30:20.249162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.249200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:56.698 [2024-11-05 03:30:20.249215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.558 ms 00:19:56.698 [2024-11-05 03:30:20.249226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.249340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.249356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:56.698 [2024-11-05 03:30:20.249367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:56.698 [2024-11-05 03:30:20.249377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.256070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:56.698 [2024-11-05 03:30:20.256102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.698 [2024-11-05 03:30:20.256115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.659 ms 00:19:56.698 [2024-11-05 03:30:20.256125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.256227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.256242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.698 [2024-11-05 03:30:20.256253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:56.698 [2024-11-05 03:30:20.256264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.256317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.256330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.698 [2024-11-05 03:30:20.256342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:56.698 [2024-11-05 03:30:20.256352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.256376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:56.698 [2024-11-05 03:30:20.261045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.261078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.698 [2024-11-05 03:30:20.261091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:19:56.698 [2024-11-05 03:30:20.261101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.261169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.698 [2024-11-05 03:30:20.261182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.698 [2024-11-05 03:30:20.261193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:56.698 [2024-11-05 03:30:20.261204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.698 [2024-11-05 03:30:20.261232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:56.699 [2024-11-05 03:30:20.261255] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:56.699 [2024-11-05 03:30:20.261307] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:56.699 [2024-11-05 03:30:20.261326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:56.699 [2024-11-05 03:30:20.261427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.699 [2024-11-05 03:30:20.261441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.699 [2024-11-05 03:30:20.261454] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.699 [2024-11-05 03:30:20.261471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.699 [2024-11-05 03:30:20.261483] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.699 [2024-11-05 03:30:20.261496] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:56.699 [2024-11-05 03:30:20.261507] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.699 [2024-11-05 03:30:20.261517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.699 [2024-11-05 03:30:20.261527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.699 [2024-11-05 03:30:20.261538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.699 [2024-11-05 03:30:20.261548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.699 [2024-11-05 03:30:20.261559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:56.699 [2024-11-05 03:30:20.261569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.699 [2024-11-05 03:30:20.261646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.699 [2024-11-05 03:30:20.261661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.699 [2024-11-05 03:30:20.261671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:56.699 [2024-11-05 03:30:20.261681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.699 [2024-11-05 03:30:20.261772] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.699 [2024-11-05 03:30:20.261785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.699 [2024-11-05 03:30:20.261796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.699 [2024-11-05 03:30:20.261826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.261843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.699 [2024-11-05 03:30:20.261856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.261868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:56.699 [2024-11-05 03:30:20.261881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.699 [2024-11-05 03:30:20.261893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:56.699 [2024-11-05 03:30:20.261908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.699 [2024-11-05 03:30:20.261921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.699 [2024-11-05 03:30:20.261937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:56.699 [2024-11-05 03:30:20.261947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.699 [2024-11-05 03:30:20.261967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.699 [2024-11-05 03:30:20.261977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:56.699 [2024-11-05 03:30:20.261987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.261996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.699 [2024-11-05 03:30:20.262006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.699 [2024-11-05 03:30:20.262034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.699 [2024-11-05 03:30:20.262062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.699 [2024-11-05 03:30:20.262095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.699 [2024-11-05 03:30:20.262123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.699 [2024-11-05 03:30:20.262149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.699 [2024-11-05 03:30:20.262174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.699 [2024-11-05 03:30:20.262191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:56.699 [2024-11-05 03:30:20.262206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.699 [2024-11-05 03:30:20.262220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.699 [2024-11-05 03:30:20.262235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:56.699 [2024-11-05 03:30:20.262251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.699 [2024-11-05 03:30:20.262275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:56.699 [2024-11-05 03:30:20.262302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.699 [2024-11-05 03:30:20.262329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.699 [2024-11-05 03:30:20.262349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.699 [2024-11-05 03:30:20.262375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:56.699 [2024-11-05 03:30:20.262388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.699 [2024-11-05 03:30:20.262400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.699 
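
The dump above walks the FTL on-disk layout region by region. As a cross-check, the l2p region's 90.00 MiB equals the 23592960 L2P entries times the 4-byte L2P address size reported earlier (94371840 B). A small table-driven re-print of a few NV cache regions, with offsets and sizes copied from the log (the struct and names are hypothetical, not ftl_layout.c internals):

#include <stdio.h>

struct region { const char *name; double off_mib; double blocks_mib; };

int main(void)
{
    /* Values transcribed from the dump_region lines above. */
    static const struct region nvc[] = {
        { "sb",        0.00,  0.12 },
        { "l2p",       0.12, 90.00 },
        { "band_md",  90.12,  0.50 },
        { "p2l0",     91.12,  8.00 },
        { "trim_md", 123.12,  0.25 },
        { "nvc_md",  123.88,  0.12 },
    };
    for (size_t i = 0; i < sizeof(nvc) / sizeof(nvc[0]); i++)
        printf("Region %-8s offset: %7.2f MiB blocks: %6.2f MiB\n",
               nvc[i].name, nvc[i].off_mib, nvc[i].blocks_mib);
    return 0;
}
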
[2024-11-05 03:30:20.262412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.699 [2024-11-05 03:30:20.262425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.699 [2024-11-05 03:30:20.262440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.699 [2024-11-05 03:30:20.262454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.699 [2024-11-05 03:30:20.262469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:56.699 [2024-11-05 03:30:20.262497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:56.699 [2024-11-05 03:30:20.262513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:56.699 [2024-11-05 03:30:20.262532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:56.699 [2024-11-05 03:30:20.262548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:56.699 [2024-11-05 03:30:20.262565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:56.699 [2024-11-05 03:30:20.262579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:56.699 [2024-11-05 03:30:20.262592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:56.699 [2024-11-05 03:30:20.262606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:56.699 [2024-11-05 03:30:20.262620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:56.699 [2024-11-05 03:30:20.262691] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.699 [2024-11-05 03:30:20.262714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.699 [2024-11-05 03:30:20.262741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.700 [2024-11-05 03:30:20.262756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.700 [2024-11-05 03:30:20.262783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.700 [2024-11-05 03:30:20.262800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.700 [2024-11-05 03:30:20.262818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.700 [2024-11-05 03:30:20.262830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:19:56.700 [2024-11-05 03:30:20.262840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.303096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.303360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.959 [2024-11-05 03:30:20.303388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.229 ms 00:19:56.959 [2024-11-05 03:30:20.303406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.303587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.303601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.959 [2024-11-05 03:30:20.303613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:56.959 [2024-11-05 03:30:20.303623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.361091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.361138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.959 [2024-11-05 03:30:20.361157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.536 ms 00:19:56.959 [2024-11-05 03:30:20.361168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.361317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.361331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.959 [2024-11-05 03:30:20.361343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.959 [2024-11-05 03:30:20.361353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.361785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.361799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.959 [2024-11-05 03:30:20.361811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:56.959 [2024-11-05 03:30:20.361825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.361978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.361995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.959 [2024-11-05 03:30:20.362012] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:56.959 [2024-11-05 03:30:20.362027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.382467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.382662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.959 [2024-11-05 03:30:20.382688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.443 ms 00:19:56.959 [2024-11-05 03:30:20.382710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.402032] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:56.959 [2024-11-05 03:30:20.402204] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:56.959 [2024-11-05 03:30:20.402228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.402242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:56.959 [2024-11-05 03:30:20.402257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.397 ms 00:19:56.959 [2024-11-05 03:30:20.402270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.432543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.432687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:56.959 [2024-11-05 03:30:20.432724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.214 ms 00:19:56.959 [2024-11-05 03:30:20.432737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.451033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.451072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:56.959 [2024-11-05 03:30:20.451086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.233 ms 00:19:56.959 [2024-11-05 03:30:20.451096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.468855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.468890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:56.959 [2024-11-05 03:30:20.468903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.710 ms 00:19:56.959 [2024-11-05 03:30:20.468913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.959 [2024-11-05 03:30:20.469743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.959 [2024-11-05 03:30:20.469777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.959 [2024-11-05 03:30:20.469793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:19:56.959 [2024-11-05 03:30:20.469806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.556107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.556166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.218 [2024-11-05 03:30:20.556183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.400 ms 00:19:57.218 [2024-11-05 03:30:20.556195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.567953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.218 [2024-11-05 03:30:20.584593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.584646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.218 [2024-11-05 03:30:20.584663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.307 ms 00:19:57.218 [2024-11-05 03:30:20.584674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.584821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.584835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.218 [2024-11-05 03:30:20.584847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.218 [2024-11-05 03:30:20.584858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.584915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.584927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.218 [2024-11-05 03:30:20.584938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:57.218 [2024-11-05 03:30:20.584948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.584977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.584991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.218 [2024-11-05 03:30:20.585002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.218 [2024-11-05 03:30:20.585012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.218 [2024-11-05 03:30:20.585049] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.218 [2024-11-05 03:30:20.585062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.218 [2024-11-05 03:30:20.585072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.219 [2024-11-05 03:30:20.585083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:57.219 [2024-11-05 03:30:20.585093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.219 [2024-11-05 03:30:20.621466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.219 [2024-11-05 03:30:20.621510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.219 [2024-11-05 03:30:20.621525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.410 ms 00:19:57.219 [2024-11-05 03:30:20.621536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.219 [2024-11-05 03:30:20.621664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.219 [2024-11-05 03:30:20.621678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.219 [2024-11-05 03:30:20.621690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:57.219 [2024-11-05 03:30:20.621700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
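
Every step in the startup sequence above follows the same four-line pattern that trace_step in mngt/ftl_mngt.c logs: an Action (or Rollback) header, the step name, a measured duration, and a status; the Rollback entries seen during shutdown reuse the same reporting to undo startup steps. A minimal standalone sketch of that pattern (run_step, now_ms and the noop step are hypothetical, not SPDK's API):

#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void);

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

static int run_step(const char *name, step_fn fn)
{
    double start = now_ms();
    int status = fn();  /* the actual management step */
    printf("Action\n name: %s\n duration: %.3f ms\n status: %d\n",
           name, now_ms() - start, status);
    return status;      /* a non-zero status would stop the pipeline */
}

static int noop_step(void) { return 0; }

int main(void)
{
    /* e.g. the pipeline runs steps such as "Initialize L2P" in order */
    return run_step("Initialize L2P", noop_step);
}
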
00:19:57.219 [2024-11-05 03:30:20.622746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.219 [2024-11-05 03:30:20.627349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.245 ms, result 0 00:19:57.219 [2024-11-05 03:30:20.627978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.219 [2024-11-05 03:30:20.646968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.154  [2024-11-05T03:30:22.673Z] Copying: 24/256 [MB] (24 MBps) [2024-11-05T03:30:24.051Z] Copying: 47/256 [MB] (23 MBps) [2024-11-05T03:30:24.988Z] Copying: 70/256 [MB] (22 MBps) [2024-11-05T03:30:25.927Z] Copying: 93/256 [MB] (22 MBps) [2024-11-05T03:30:26.863Z] Copying: 115/256 [MB] (22 MBps) [2024-11-05T03:30:27.798Z] Copying: 139/256 [MB] (23 MBps) [2024-11-05T03:30:28.735Z] Copying: 161/256 [MB] (22 MBps) [2024-11-05T03:30:29.669Z] Copying: 184/256 [MB] (22 MBps) [2024-11-05T03:30:31.045Z] Copying: 207/256 [MB] (22 MBps) [2024-11-05T03:30:31.982Z] Copying: 230/256 [MB] (23 MBps) [2024-11-05T03:30:31.982Z] Copying: 253/256 [MB] (22 MBps) [2024-11-05T03:30:31.982Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-05 03:30:31.768452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.398 [2024-11-05 03:30:31.783244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.783441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.398 [2024-11-05 03:30:31.783607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.398 [2024-11-05 03:30:31.783652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.783725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:08.398 [2024-11-05 03:30:31.788355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.788496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.398 [2024-11-05 03:30:31.788624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:20:08.398 [2024-11-05 03:30:31.788666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.790822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.790972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.398 [2024-11-05 03:30:31.791070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:20:08.398 [2024-11-05 03:30:31.791111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.797661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.797825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.398 [2024-11-05 03:30:31.797857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.507 ms 00:20:08.398 [2024-11-05 03:30:31.797870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.803323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 
[2024-11-05 03:30:31.803483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:08.398 [2024-11-05 03:30:31.803505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.415 ms 00:20:08.398 [2024-11-05 03:30:31.803517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.838063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.838104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.398 [2024-11-05 03:30:31.838121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.525 ms 00:20:08.398 [2024-11-05 03:30:31.838133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.858826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.858878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.398 [2024-11-05 03:30:31.858894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.667 ms 00:20:08.398 [2024-11-05 03:30:31.858911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.859052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.859067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.398 [2024-11-05 03:30:31.859080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:08.398 [2024-11-05 03:30:31.859091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.894758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.894799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:08.398 [2024-11-05 03:30:31.894815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.705 ms 00:20:08.398 [2024-11-05 03:30:31.894826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.929149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.929319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:08.398 [2024-11-05 03:30:31.929343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.319 ms 00:20:08.398 [2024-11-05 03:30:31.929354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.398 [2024-11-05 03:30:31.963921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.398 [2024-11-05 03:30:31.963961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.398 [2024-11-05 03:30:31.963976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.544 ms 00:20:08.398 [2024-11-05 03:30:31.963987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.659 [2024-11-05 03:30:31.999395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.659 [2024-11-05 03:30:31.999438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.659 [2024-11-05 03:30:31.999454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.375 ms 00:20:08.659 [2024-11-05 03:30:31.999465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.659 [2024-11-05 03:30:31.999528] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.659 [2024-11-05 03:30:31.999549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-98 elided: each reports the identical 0 / 261120 wr_cnt: 0 state: free ...] 00:20:08.660 [2024-11-05 03:30:32.000862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free
00:20:08.660 [2024-11-05 03:30:32.000875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:08.660 [2024-11-05 03:30:32.000897] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:08.660 [2024-11-05 03:30:32.000910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce
00:20:08.660 [2024-11-05 03:30:32.000924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:08.660 [2024-11-05 03:30:32.000936] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:08.660 [2024-11-05 03:30:32.000948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:08.660 [2024-11-05 03:30:32.000961] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:08.660 [2024-11-05 03:30:32.000973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:08.660 [2024-11-05 03:30:32.000986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:08.660 [2024-11-05 03:30:32.000999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:08.660 [2024-11-05 03:30:32.001010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:08.660 [2024-11-05 03:30:32.001020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:08.660 [2024-11-05 03:30:32.001033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.660 [2024-11-05 03:30:32.001051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:08.660 [2024-11-05 03:30:32.001065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms
00:20:08.660 [2024-11-05 03:30:32.001076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.660 [2024-11-05 03:30:32.022378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.660 [2024-11-05 03:30:32.022548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:08.660 [2024-11-05 03:30:32.022574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.312 ms
00:20:08.660 [2024-11-05 03:30:32.022587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.660 [2024-11-05 03:30:32.023187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:08.660 [2024-11-05 03:30:32.023207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:08.660 [2024-11-05 03:30:32.023221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms
00:20:08.660 [2024-11-05 03:30:32.023233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.660 [2024-11-05 03:30:32.081483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:08.660 [2024-11-05 03:30:32.081651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:08.660 [2024-11-05 03:30:32.081676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.660 [2024-11-05 03:30:32.081691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.660 [2024-11-05 03:30:32.081794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:08.660 [2024-11-05 03:30:32.081809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:08.660 [2024-11-05 03:30:32.081823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:20:08.660 [2024-11-05 03:30:32.081836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.660 [2024-11-05 03:30:32.081907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.660 [2024-11-05 03:30:32.081923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.660 [2024-11-05 03:30:32.081937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.660 [2024-11-05 03:30:32.081950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.660 [2024-11-05 03:30:32.081974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.660 [2024-11-05 03:30:32.081994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.660 [2024-11-05 03:30:32.082007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.660 [2024-11-05 03:30:32.082020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.660 [2024-11-05 03:30:32.216679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.660 [2024-11-05 03:30:32.216935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.660 [2024-11-05 03:30:32.216963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.660 [2024-11-05 03:30:32.216978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.919 [2024-11-05 03:30:32.325562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.920 [2024-11-05 03:30:32.325643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.920 [2024-11-05 03:30:32.325663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.920 [2024-11-05 03:30:32.325676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.920 [2024-11-05 03:30:32.325828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.920 [2024-11-05 03:30:32.325844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.920 [2024-11-05 03:30:32.325858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.920 [2024-11-05 03:30:32.325870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.920 [2024-11-05 03:30:32.325906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.920 [2024-11-05 03:30:32.325920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.920 [2024-11-05 03:30:32.325940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.920 [2024-11-05 03:30:32.325953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.920 [2024-11-05 03:30:32.326089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.920 [2024-11-05 03:30:32.326106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.920 [2024-11-05 03:30:32.326119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.920 [2024-11-05 03:30:32.326133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.920 [2024-11-05 03:30:32.326190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.920 [2024-11-05 03:30:32.326206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.920 
[2024-11-05 03:30:32.326219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.920 [2024-11-05 03:30:32.326237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.920 [2024-11-05 03:30:32.326291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:08.920 [2024-11-05 03:30:32.326585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:08.920 [2024-11-05 03:30:32.326633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.920 [2024-11-05 03:30:32.326670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.920 [2024-11-05 03:30:32.326789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:08.920 [2024-11-05 03:30:32.326902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:08.920 [2024-11-05 03:30:32.326957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.920 [2024-11-05 03:30:32.326993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.920 [2024-11-05 03:30:32.327248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.860 ms, result 0
00:20:10.299
00:20:10.299
00:20:10.299 03:30:33 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75855
00:20:10.299 03:30:33 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:10.299 03:30:33 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75855
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75855 ']'
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:10.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:10.299 03:30:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:10.299 [2024-11-05 03:30:33.707631] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:20:10.299 [2024-11-05 03:30:33.707942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75855 ] 00:20:10.558 [2024-11-05 03:30:33.890081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.558 [2024-11-05 03:30:34.019865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.495 03:30:35 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.495 03:30:35 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:11.495 03:30:35 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:11.754 [2024-11-05 03:30:35.216586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.754 [2024-11-05 03:30:35.216936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.013 [2024-11-05 03:30:35.390326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.013 [2024-11-05 03:30:35.390385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.014 [2024-11-05 03:30:35.390411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:12.014 [2024-11-05 03:30:35.390424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.394442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.394483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.014 [2024-11-05 03:30:35.394501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.999 ms 00:20:12.014 [2024-11-05 03:30:35.394513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.394633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.014 [2024-11-05 03:30:35.395693] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.014 [2024-11-05 03:30:35.395735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.395749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.014 [2024-11-05 03:30:35.395765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:20:12.014 [2024-11-05 03:30:35.395777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.398430] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.014 [2024-11-05 03:30:35.419378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.419436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.014 [2024-11-05 03:30:35.419456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.987 ms 00:20:12.014 [2024-11-05 03:30:35.419478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.419632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.419657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.014 [2024-11-05 03:30:35.419673] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:12.014 [2024-11-05 03:30:35.419694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.432846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.432902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.014 [2024-11-05 03:30:35.432920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:20:12.014 [2024-11-05 03:30:35.432943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.433147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.433174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.014 [2024-11-05 03:30:35.433190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:12.014 [2024-11-05 03:30:35.433211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.433268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.433317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.014 [2024-11-05 03:30:35.433333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:12.014 [2024-11-05 03:30:35.433355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.433394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.014 [2024-11-05 03:30:35.439825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.439866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.014 [2024-11-05 03:30:35.439891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:20:12.014 [2024-11-05 03:30:35.439905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.439989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.440007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.014 [2024-11-05 03:30:35.440029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:12.014 [2024-11-05 03:30:35.440048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.440082] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.014 [2024-11-05 03:30:35.440113] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.014 [2024-11-05 03:30:35.440174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:12.014 [2024-11-05 03:30:35.440199] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.014 [2024-11-05 03:30:35.440332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.014 [2024-11-05 03:30:35.440352] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.014 [2024-11-05 03:30:35.440377] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.014 [2024-11-05 03:30:35.440421] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.014 [2024-11-05 03:30:35.440441] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.014 [2024-11-05 03:30:35.440457] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.014 [2024-11-05 03:30:35.440476] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.014 [2024-11-05 03:30:35.440489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.014 [2024-11-05 03:30:35.440510] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.014 [2024-11-05 03:30:35.440524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.440542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.014 [2024-11-05 03:30:35.440557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:20:12.014 [2024-11-05 03:30:35.440574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.440666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.014 [2024-11-05 03:30:35.440686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.014 [2024-11-05 03:30:35.440700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:12.014 [2024-11-05 03:30:35.440722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.014 [2024-11-05 03:30:35.440844] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.014 [2024-11-05 03:30:35.440889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.014 [2024-11-05 03:30:35.440904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.014 [2024-11-05 03:30:35.440926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.014 [2024-11-05 03:30:35.440940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.014 [2024-11-05 03:30:35.440960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.014 [2024-11-05 03:30:35.440973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.014 [2024-11-05 03:30:35.441014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.014 [2024-11-05 03:30:35.441046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.014 [2024-11-05 03:30:35.441077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.014 [2024-11-05 03:30:35.441090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.014 [2024-11-05 03:30:35.441108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.014 [2024-11-05 03:30:35.441121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.014 [2024-11-05 03:30:35.441139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.014 
[2024-11-05 03:30:35.441151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.014 [2024-11-05 03:30:35.441171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.014 [2024-11-05 03:30:35.441228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.014 [2024-11-05 03:30:35.441281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.014 [2024-11-05 03:30:35.441334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.014 [2024-11-05 03:30:35.441376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.014 [2024-11-05 03:30:35.441405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.014 [2024-11-05 03:30:35.441417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.014 [2024-11-05 03:30:35.441444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.014 [2024-11-05 03:30:35.441459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.014 [2024-11-05 03:30:35.441470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.014 [2024-11-05 03:30:35.441486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.014 [2024-11-05 03:30:35.441498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.014 [2024-11-05 03:30:35.441517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.014 [2024-11-05 03:30:35.441529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.014 [2024-11-05 03:30:35.441544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.014 [2024-11-05 03:30:35.441555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.015 [2024-11-05 03:30:35.441570] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.015 [2024-11-05 03:30:35.441583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.015 [2024-11-05 03:30:35.441604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.015 [2024-11-05 03:30:35.441615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.015 [2024-11-05 03:30:35.441631] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:12.015 [2024-11-05 03:30:35.441643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.015 [2024-11-05 03:30:35.441660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.015 [2024-11-05 03:30:35.441672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.015 [2024-11-05 03:30:35.441692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.015 [2024-11-05 03:30:35.441704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.015 [2024-11-05 03:30:35.441724] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.015 [2024-11-05 03:30:35.441740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.441768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.015 [2024-11-05 03:30:35.441782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.015 [2024-11-05 03:30:35.441801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:12.015 [2024-11-05 03:30:35.441814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.015 [2024-11-05 03:30:35.441834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.015 [2024-11-05 03:30:35.441847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.015 [2024-11-05 03:30:35.441866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.015 [2024-11-05 03:30:35.441879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.015 [2024-11-05 03:30:35.441898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.015 [2024-11-05 03:30:35.441911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.441930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.441943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.441963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.441977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.015 [2024-11-05 03:30:35.441996] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.015 [2024-11-05 
03:30:35.442011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.442035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.015 [2024-11-05 03:30:35.442049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.015 [2024-11-05 03:30:35.442069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.015 [2024-11-05 03:30:35.442082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.015 [2024-11-05 03:30:35.442102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.442116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.015 [2024-11-05 03:30:35.442136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:20:12.015 [2024-11-05 03:30:35.442148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.497363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.497582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.015 [2024-11-05 03:30:35.497709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.216 ms 00:20:12.015 [2024-11-05 03:30:35.497762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.498097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.498280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.015 [2024-11-05 03:30:35.498413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:12.015 [2024-11-05 03:30:35.498461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.555789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.555998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.015 [2024-11-05 03:30:35.556034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.141 ms 00:20:12.015 [2024-11-05 03:30:35.556047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.556168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.556184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.015 [2024-11-05 03:30:35.556206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.015 [2024-11-05 03:30:35.556219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.557066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.557091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.015 [2024-11-05 03:30:35.557122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:20:12.015 [2024-11-05 03:30:35.557136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.557320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.557338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.015 [2024-11-05 03:30:35.557360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:20:12.015 [2024-11-05 03:30:35.557374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.015 [2024-11-05 03:30:35.586345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.015 [2024-11-05 03:30:35.586527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.015 [2024-11-05 03:30:35.586561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.979 ms 00:20:12.015 [2024-11-05 03:30:35.586576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.609274] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:12.274 [2024-11-05 03:30:35.609460] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:12.274 [2024-11-05 03:30:35.609501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.609517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:12.274 [2024-11-05 03:30:35.609536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.774 ms 00:20:12.274 [2024-11-05 03:30:35.609550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.643421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.643594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:12.274 [2024-11-05 03:30:35.643628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.789 ms 00:20:12.274 [2024-11-05 03:30:35.643643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.663729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.663775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:12.274 [2024-11-05 03:30:35.663809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.961 ms 00:20:12.274 [2024-11-05 03:30:35.663821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.683073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.683260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:12.274 [2024-11-05 03:30:35.683316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:20:12.274 [2024-11-05 03:30:35.683332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.684393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.684430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:12.274 [2024-11-05 03:30:35.684453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:20:12.274 [2024-11-05 03:30:35.684468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 
03:30:35.801260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.801377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:12.274 [2024-11-05 03:30:35.801410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.933 ms 00:20:12.274 [2024-11-05 03:30:35.801424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.813563] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:12.274 [2024-11-05 03:30:35.841079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.841415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:12.274 [2024-11-05 03:30:35.841459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.564 ms 00:20:12.274 [2024-11-05 03:30:35.841497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.841726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.841752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:12.274 [2024-11-05 03:30:35.841769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:12.274 [2024-11-05 03:30:35.841790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.841871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.841896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:12.274 [2024-11-05 03:30:35.841913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:12.274 [2024-11-05 03:30:35.841933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.841975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.842009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:12.274 [2024-11-05 03:30:35.842024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:12.274 [2024-11-05 03:30:35.842049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.274 [2024-11-05 03:30:35.842106] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:12.274 [2024-11-05 03:30:35.842136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.274 [2024-11-05 03:30:35.842150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:12.274 [2024-11-05 03:30:35.842197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:12.274 [2024-11-05 03:30:35.842212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.533 [2024-11-05 03:30:35.883144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.533 [2024-11-05 03:30:35.883195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:12.533 [2024-11-05 03:30:35.883223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.942 ms 00:20:12.533 [2024-11-05 03:30:35.883238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.533 [2024-11-05 03:30:35.883420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.533 [2024-11-05 03:30:35.883439] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:12.533 [2024-11-05 03:30:35.883461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:20:12.533 [2024-11-05 03:30:35.883483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:12.533 [2024-11-05 03:30:35.884967] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:12.533 [2024-11-05 03:30:35.889702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 495.015 ms, result 0
00:20:12.533 [2024-11-05 03:30:35.891573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:12.533 Some configs were skipped because the RPC state that can call them passed over.
00:20:12.533 03:30:35 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:12.792 [2024-11-05 03:30:36.148204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:12.792 [2024-11-05 03:30:36.148421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:12.792 [2024-11-05 03:30:36.148448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms
00:20:12.792 [2024-11-05 03:30:36.148467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:12.792 [2024-11-05 03:30:36.148518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.048 ms, result 0
00:20:12.792 true
00:20:12.792 03:30:36 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:13.051 [2024-11-05 03:30:36.379582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:13.051 [2024-11-05 03:30:36.379646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:13.051 [2024-11-05 03:30:36.379672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms
00:20:13.051 [2024-11-05 03:30:36.379686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.051 [2024-11-05 03:30:36.379742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.326 ms, result 0
00:20:13.051 true
00:20:13.051 03:30:36 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75855
00:20:13.051 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75855 ']'
00:20:13.051 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75855
00:20:13.051 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:20:13.051 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75855
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75855'
killing process with pid 75855
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75855
00:20:13.052 03:30:36 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75855
00:20:14.430 [2024-11-05 03:30:37.772064]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.430 [2024-11-05 03:30:37.772351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.430 [2024-11-05 03:30:37.772503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.430 [2024-11-05 03:30:37.772530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.430 [2024-11-05 03:30:37.772598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:14.430 [2024-11-05 03:30:37.777668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.430 [2024-11-05 03:30:37.777714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.430 [2024-11-05 03:30:37.777736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.047 ms 00:20:14.430 [2024-11-05 03:30:37.777768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.430 [2024-11-05 03:30:37.778118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.430 [2024-11-05 03:30:37.778149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.430 [2024-11-05 03:30:37.778166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:14.430 [2024-11-05 03:30:37.778179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.430 [2024-11-05 03:30:37.781788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.430 [2024-11-05 03:30:37.781833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.431 [2024-11-05 03:30:37.781873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:20:14.431 [2024-11-05 03:30:37.781889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.788155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.788200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.431 [2024-11-05 03:30:37.788220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:20:14.431 [2024-11-05 03:30:37.788233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.805163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.805211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.431 [2024-11-05 03:30:37.805238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.801 ms 00:20:14.431 [2024-11-05 03:30:37.805265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.817560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.817618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.431 [2024-11-05 03:30:37.817643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.218 ms 00:20:14.431 [2024-11-05 03:30:37.817656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.817829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.817846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.431 [2024-11-05 03:30:37.817864] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:14.431 [2024-11-05 03:30:37.817878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.834873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.834914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:14.431 [2024-11-05 03:30:37.834943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:20:14.431 [2024-11-05 03:30:37.834957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.851087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.851127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:14.431 [2024-11-05 03:30:37.851158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.064 ms 00:20:14.431 [2024-11-05 03:30:37.851171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.866776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.866813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:14.431 [2024-11-05 03:30:37.866837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.553 ms 00:20:14.431 [2024-11-05 03:30:37.866850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.882535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.431 [2024-11-05 03:30:37.882592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:14.431 [2024-11-05 03:30:37.882617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.601 ms 00:20:14.431 [2024-11-05 03:30:37.882630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.431 [2024-11-05 03:30:37.882708] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:14.431 [2024-11-05 03:30:37.882730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 
03:30:37.882943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.882978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:14.431 [2024-11-05 03:30:37.883463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:14.431 [2024-11-05 03:30:37.883911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.883928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.883941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.883957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.883971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.883988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:14.432 [2024-11-05 03:30:37.884660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:14.432 [2024-11-05 03:30:37.884694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce 00:20:14.432 [2024-11-05 03:30:37.884725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:14.432 [2024-11-05 03:30:37.884756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:14.432 [2024-11-05 03:30:37.884770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:14.432 [2024-11-05 03:30:37.884791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:14.432 [2024-11-05 03:30:37.884804] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:14.432 [2024-11-05 03:30:37.884824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:14.432 [2024-11-05 03:30:37.884837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:14.432 [2024-11-05 03:30:37.884856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:14.432 [2024-11-05 03:30:37.884869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:14.432 [2024-11-05 03:30:37.884889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
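Aside for anyone triaging these dumps: the band report above shows every band as "0 / 261120 wr_cnt: 0 state: free", and the stats block reports total valid LBAs: 0 with user writes: 0, so no user data was resident at this shutdown. Since each dump runs to a hundred near-identical records, a shell one-liner is a quick way to confirm no band is in an unexpected state. A minimal sketch, assuming this console output has been saved to a file named build.log (a hypothetical name, not part of the run above); the pattern mirrors the "Band N: valid / total wr_cnt: W state: S" layout printed by ftl_dev_dump_bands, and grep -o keeps it working even where several records share one physical line:

  # Count FTL bands by state in a saved copy of this log.
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log \
    | awk '{states[$NF]++} END {for (s in states) print s, states[s]}'

Run against this dump alone it should print "free 100" (bands 1 through 100); any other state in the output points at the band worth a closer look.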
00:20:14.432 [2024-11-05 03:30:37.884902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:14.432 [2024-11-05 03:30:37.884923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:20:14.432 [2024-11-05 03:30:37.884937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.908168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.432 [2024-11-05 03:30:37.908340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:14.432 [2024-11-05 03:30:37.908384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.217 ms 00:20:14.432 [2024-11-05 03:30:37.908399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.909103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.432 [2024-11-05 03:30:37.909133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:14.432 [2024-11-05 03:30:37.909154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:20:14.432 [2024-11-05 03:30:37.909175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.989924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.432 [2024-11-05 03:30:37.989965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.432 [2024-11-05 03:30:37.990006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.432 [2024-11-05 03:30:37.990021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.990182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.432 [2024-11-05 03:30:37.990200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.432 [2024-11-05 03:30:37.990230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.432 [2024-11-05 03:30:37.990266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.990376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.432 [2024-11-05 03:30:37.990393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.432 [2024-11-05 03:30:37.990414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.432 [2024-11-05 03:30:37.990445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.432 [2024-11-05 03:30:37.990476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.432 [2024-11-05 03:30:37.990490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.432 [2024-11-05 03:30:37.990508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.432 [2024-11-05 03:30:37.990522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.691 [2024-11-05 03:30:38.133197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.691 [2024-11-05 03:30:38.133308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.691 [2024-11-05 03:30:38.133340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.691 [2024-11-05 03:30:38.133355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.691 [2024-11-05 
03:30:38.245485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.691 [2024-11-05 03:30:38.245808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.691 [2024-11-05 03:30:38.245846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.691 [2024-11-05 03:30:38.245867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.691 [2024-11-05 03:30:38.246044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.691 [2024-11-05 03:30:38.246061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.691 [2024-11-05 03:30:38.246083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.246140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.692 [2024-11-05 03:30:38.246155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.692 [2024-11-05 03:30:38.246172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.246385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.692 [2024-11-05 03:30:38.246403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.692 [2024-11-05 03:30:38.246421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.246495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.692 [2024-11-05 03:30:38.246511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:14.692 [2024-11-05 03:30:38.246529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.246601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.692 [2024-11-05 03:30:38.246620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.692 [2024-11-05 03:30:38.246641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.246749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.692 [2024-11-05 03:30:38.246765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.692 [2024-11-05 03:30:38.246783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.692 [2024-11-05 03:30:38.246796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.692 [2024-11-05 03:30:38.247019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.683 ms, result 0 00:20:16.069 03:30:39 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:16.069 03:30:39 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.069 [2024-11-05 03:30:39.537119] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:20:16.069 [2024-11-05 03:30:39.537259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75924 ] 00:20:16.327 [2024-11-05 03:30:39.730496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.327 [2024-11-05 03:30:39.881520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.893 [2024-11-05 03:30:40.322553] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.893 [2024-11-05 03:30:40.322646] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.154 [2024-11-05 03:30:40.493939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.494010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.154 [2024-11-05 03:30:40.494030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.154 [2024-11-05 03:30:40.494044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.497932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.497989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.154 [2024-11-05 03:30:40.498006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.859 ms 00:20:17.154 [2024-11-05 03:30:40.498019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.498147] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.154 [2024-11-05 03:30:40.499237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.154 [2024-11-05 03:30:40.499282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.499309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.154 [2024-11-05 03:30:40.499325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:20:17.154 [2024-11-05 03:30:40.499338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.501951] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.154 [2024-11-05 03:30:40.524046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.524099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.154 [2024-11-05 03:30:40.524118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:20:17.154 [2024-11-05 03:30:40.524132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.524256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.524275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.154 [2024-11-05 03:30:40.524314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:20:17.154 [2024-11-05 03:30:40.524328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.537280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.537326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.154 [2024-11-05 03:30:40.537342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.907 ms 00:20:17.154 [2024-11-05 03:30:40.537355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.537506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.537525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.154 [2024-11-05 03:30:40.537539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:17.154 [2024-11-05 03:30:40.537552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.537587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.537607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.154 [2024-11-05 03:30:40.537621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:17.154 [2024-11-05 03:30:40.537651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.537681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.154 [2024-11-05 03:30:40.543963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.544004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.154 [2024-11-05 03:30:40.544020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.300 ms 00:20:17.154 [2024-11-05 03:30:40.544035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.544099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.544114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.154 [2024-11-05 03:30:40.544129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:17.154 [2024-11-05 03:30:40.544143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.544175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.154 [2024-11-05 03:30:40.544212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.154 [2024-11-05 03:30:40.544256] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.154 [2024-11-05 03:30:40.544308] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.154 [2024-11-05 03:30:40.544452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.154 [2024-11-05 03:30:40.544471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.154 [2024-11-05 03:30:40.544488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.154 [2024-11-05 03:30:40.544504] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.154 [2024-11-05 03:30:40.544525] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.154 [2024-11-05 03:30:40.544540] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.154 [2024-11-05 03:30:40.544554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.154 [2024-11-05 03:30:40.544566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.154 [2024-11-05 03:30:40.544579] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.154 [2024-11-05 03:30:40.544594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.544607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.154 [2024-11-05 03:30:40.544621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:20:17.154 [2024-11-05 03:30:40.544634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.544734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.154 [2024-11-05 03:30:40.544749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.154 [2024-11-05 03:30:40.544767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:17.154 [2024-11-05 03:30:40.544779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.154 [2024-11-05 03:30:40.544895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.154 [2024-11-05 03:30:40.544912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.154 [2024-11-05 03:30:40.544926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.154 [2024-11-05 03:30:40.544940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.154 [2024-11-05 03:30:40.544954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.154 [2024-11-05 03:30:40.544967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.154 [2024-11-05 03:30:40.544979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.154 [2024-11-05 03:30:40.544994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.154 [2024-11-05 03:30:40.545007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.154 [2024-11-05 03:30:40.545048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.154 [2024-11-05 03:30:40.545060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.154 [2024-11-05 03:30:40.545072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.154 [2024-11-05 03:30:40.545099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.154 [2024-11-05 03:30:40.545110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.154 [2024-11-05 03:30:40.545123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545135] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.154 [2024-11-05 03:30:40.545149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.154 [2024-11-05 03:30:40.545161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.154 [2024-11-05 03:30:40.545185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.154 [2024-11-05 03:30:40.545209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.154 [2024-11-05 03:30:40.545221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.154 [2024-11-05 03:30:40.545243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.154 [2024-11-05 03:30:40.545255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.154 [2024-11-05 03:30:40.545278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.154 [2024-11-05 03:30:40.545289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.154 [2024-11-05 03:30:40.545311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.154 [2024-11-05 03:30:40.545322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.154 [2024-11-05 03:30:40.545333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.155 [2024-11-05 03:30:40.545344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.155 [2024-11-05 03:30:40.545368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.155 [2024-11-05 03:30:40.545381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.155 [2024-11-05 03:30:40.545393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.155 [2024-11-05 03:30:40.545405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.155 [2024-11-05 03:30:40.545416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.155 [2024-11-05 03:30:40.545427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.155 [2024-11-05 03:30:40.545439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.155 [2024-11-05 03:30:40.545452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.155 [2024-11-05 03:30:40.545464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.155 [2024-11-05 03:30:40.545477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.155 [2024-11-05 03:30:40.545490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.155 [2024-11-05 03:30:40.545507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.155 [2024-11-05 03:30:40.545521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.155 
[2024-11-05 03:30:40.545533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.155 [2024-11-05 03:30:40.545546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.155 [2024-11-05 03:30:40.545558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.155 [2024-11-05 03:30:40.545569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.155 [2024-11-05 03:30:40.545581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.155 [2024-11-05 03:30:40.545595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.155 [2024-11-05 03:30:40.545626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.155 [2024-11-05 03:30:40.545655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.155 [2024-11-05 03:30:40.545668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.155 [2024-11-05 03:30:40.545681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.155 [2024-11-05 03:30:40.545696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.155 [2024-11-05 03:30:40.545709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.155 [2024-11-05 03:30:40.545723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.155 [2024-11-05 03:30:40.545737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.155 [2024-11-05 03:30:40.545750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.155 [2024-11-05 03:30:40.545763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.155 [2024-11-05 03:30:40.545828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.155 [2024-11-05 03:30:40.545843] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.155 [2024-11-05 03:30:40.545872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.155 [2024-11-05 03:30:40.545885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.155 [2024-11-05 03:30:40.545898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.155 [2024-11-05 03:30:40.545912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.545925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.155 [2024-11-05 03:30:40.545944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:20:17.155 [2024-11-05 03:30:40.545958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.598612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.598836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.155 [2024-11-05 03:30:40.599020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.659 ms 00:20:17.155 [2024-11-05 03:30:40.599069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.599323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.599380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.155 [2024-11-05 03:30:40.599502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:17.155 [2024-11-05 03:30:40.599546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.666918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.667105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.155 [2024-11-05 03:30:40.667274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.412 ms 00:20:17.155 [2024-11-05 03:30:40.667336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.667477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.667683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.155 [2024-11-05 03:30:40.667730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.155 [2024-11-05 03:30:40.667769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.668600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.668743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.155 [2024-11-05 03:30:40.668885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:20:17.155 [2024-11-05 03:30:40.668938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 
03:30:40.669173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.669229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.155 [2024-11-05 03:30:40.669359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:17.155 [2024-11-05 03:30:40.669404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.694206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.694375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.155 [2024-11-05 03:30:40.694564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.775 ms 00:20:17.155 [2024-11-05 03:30:40.694610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.155 [2024-11-05 03:30:40.716110] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.155 [2024-11-05 03:30:40.716298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.155 [2024-11-05 03:30:40.716478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.155 [2024-11-05 03:30:40.716520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.155 [2024-11-05 03:30:40.716557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.693 ms 00:20:17.155 [2024-11-05 03:30:40.716594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.750011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.750199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.415 [2024-11-05 03:30:40.750369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.135 ms 00:20:17.415 [2024-11-05 03:30:40.750420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.770127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.770282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.415 [2024-11-05 03:30:40.770412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.614 ms 00:20:17.415 [2024-11-05 03:30:40.770543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.789826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.789998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.415 [2024-11-05 03:30:40.790079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.189 ms 00:20:17.415 [2024-11-05 03:30:40.790120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.790955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.790993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.415 [2024-11-05 03:30:40.791010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:20:17.415 [2024-11-05 03:30:40.791023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.889973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.890255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.415 [2024-11-05 03:30:40.890305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.072 ms 00:20:17.415 [2024-11-05 03:30:40.890321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.901645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.415 [2024-11-05 03:30:40.927147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.927440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.415 [2024-11-05 03:30:40.927473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.768 ms 00:20:17.415 [2024-11-05 03:30:40.927498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.927680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.927697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.415 [2024-11-05 03:30:40.927712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:17.415 [2024-11-05 03:30:40.927724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.927801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.927816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.415 [2024-11-05 03:30:40.927830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:17.415 [2024-11-05 03:30:40.927850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.927890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.927905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.415 [2024-11-05 03:30:40.927918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.415 [2024-11-05 03:30:40.927930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.927976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.415 [2024-11-05 03:30:40.927992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.928005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.415 [2024-11-05 03:30:40.928018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:17.415 [2024-11-05 03:30:40.928030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.966700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.966867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.415 [2024-11-05 03:30:40.966894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.695 ms 00:20:17.415 [2024-11-05 03:30:40.966907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.415 [2024-11-05 03:30:40.967047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.415 [2024-11-05 03:30:40.967063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:17.415 [2024-11-05 03:30:40.967079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:17.415 [2024-11-05 03:30:40.967091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.416 [2024-11-05 03:30:40.968431] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.416 [2024-11-05 03:30:40.972778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 474.850 ms, result 0 00:20:17.416 [2024-11-05 03:30:40.973768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.416 [2024-11-05 03:30:40.992576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.793  [2024-11-05T03:30:43.313Z] Copying: 27/256 [MB] (27 MBps) [2024-11-05T03:30:44.251Z] Copying: 52/256 [MB] (24 MBps) [2024-11-05T03:30:45.187Z] Copying: 78/256 [MB] (25 MBps) [2024-11-05T03:30:46.124Z] Copying: 102/256 [MB] (24 MBps) [2024-11-05T03:30:47.061Z] Copying: 127/256 [MB] (25 MBps) [2024-11-05T03:30:47.998Z] Copying: 151/256 [MB] (23 MBps) [2024-11-05T03:30:49.407Z] Copying: 175/256 [MB] (23 MBps) [2024-11-05T03:30:50.344Z] Copying: 199/256 [MB] (23 MBps) [2024-11-05T03:30:51.293Z] Copying: 221/256 [MB] (22 MBps) [2024-11-05T03:30:51.869Z] Copying: 242/256 [MB] (21 MBps) [2024-11-05T03:30:51.869Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-05 03:30:51.579523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.285 [2024-11-05 03:30:51.594540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.594749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.285 [2024-11-05 03:30:51.594778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.285 [2024-11-05 03:30:51.594808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.594847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:28.285 [2024-11-05 03:30:51.599618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.599656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.285 [2024-11-05 03:30:51.599671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:20:28.285 [2024-11-05 03:30:51.599683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.599944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.599959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.285 [2024-11-05 03:30:51.599972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:28.285 [2024-11-05 03:30:51.599984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.602858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.602894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.285 [2024-11-05 03:30:51.602907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:20:28.285 [2024-11-05 03:30:51.602919] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.608297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.608480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.285 [2024-11-05 03:30:51.608504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.364 ms 00:20:28.285 [2024-11-05 03:30:51.608517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.642147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.642186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.285 [2024-11-05 03:30:51.642202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.592 ms 00:20:28.285 [2024-11-05 03:30:51.642213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.663221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.663263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.285 [2024-11-05 03:30:51.663322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.983 ms 00:20:28.285 [2024-11-05 03:30:51.663335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.285 [2024-11-05 03:30:51.663509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.285 [2024-11-05 03:30:51.663524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.285 [2024-11-05 03:30:51.663537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:20:28.286 [2024-11-05 03:30:51.663550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.286 [2024-11-05 03:30:51.698809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.286 [2024-11-05 03:30:51.698849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.286 [2024-11-05 03:30:51.698863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.273 ms 00:20:28.286 [2024-11-05 03:30:51.698890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.286 [2024-11-05 03:30:51.734086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.286 [2024-11-05 03:30:51.734275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.286 [2024-11-05 03:30:51.734309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.191 ms 00:20:28.286 [2024-11-05 03:30:51.734320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.286 [2024-11-05 03:30:51.769479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.286 [2024-11-05 03:30:51.769520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.286 [2024-11-05 03:30:51.769536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.154 ms 00:20:28.286 [2024-11-05 03:30:51.769548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.286 [2024-11-05 03:30:51.805691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.286 [2024-11-05 03:30:51.805735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.286 [2024-11-05 03:30:51.805751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 36.104 ms 00:20:28.286 [2024-11-05 03:30:51.805779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.286 [2024-11-05 03:30:51.805840] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.286 [2024-11-05 03:30:51.805862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.805993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 
[2024-11-05 03:30:51.806157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.286 [2024-11-05 03:30:51.806502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.286 [2024-11-05 03:30:51.806840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.806995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.287 [2024-11-05 03:30:51.807173] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.287 [2024-11-05 03:30:51.807184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce 00:20:28.287 [2024-11-05 03:30:51.807197] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.287 [2024-11-05 03:30:51.807209] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.287 [2024-11-05 03:30:51.807221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.287 [2024-11-05 03:30:51.807233] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.287 [2024-11-05 03:30:51.807245] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.287 [2024-11-05 03:30:51.807258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.287 [2024-11-05 03:30:51.807279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.287 [2024-11-05 03:30:51.807299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.287 [2024-11-05 03:30:51.807310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.287 [2024-11-05 03:30:51.807323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.287 [2024-11-05 03:30:51.807335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.287 [2024-11-05 03:30:51.807366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:20:28.287 [2024-11-05 03:30:51.807378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.287 [2024-11-05 03:30:51.828154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.287 [2024-11-05 03:30:51.828330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.287 [2024-11-05 03:30:51.828353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.783 ms 00:20:28.287 [2024-11-05 03:30:51.828367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.287 [2024-11-05 03:30:51.829009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.287 [2024-11-05 03:30:51.829028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.287 [2024-11-05 03:30:51.829042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:28.287 [2024-11-05 03:30:51.829053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:51.887711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:51.887756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.547 [2024-11-05 03:30:51.887772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:51.887786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:51.887913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 
03:30:51.887928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.547 [2024-11-05 03:30:51.887941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:51.887953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:51.888015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:51.888031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.547 [2024-11-05 03:30:51.888044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:51.888057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:51.888090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:51.888104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.547 [2024-11-05 03:30:51.888116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:51.888128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.017737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.017835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.547 [2024-11-05 03:30:52.017870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.017885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.117204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.117481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.547 [2024-11-05 03:30:52.117511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.117527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.117640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.117655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.547 [2024-11-05 03:30:52.117669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.117682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.117719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.117748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.547 [2024-11-05 03:30:52.117761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.117773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.117959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.117976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.547 [2024-11-05 03:30:52.117990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.118002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.118053] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.118068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.547 [2024-11-05 03:30:52.118094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.118107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.118161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.118176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.547 [2024-11-05 03:30:52.118188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.118201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.118261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.547 [2024-11-05 03:30:52.118298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.547 [2024-11-05 03:30:52.118313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.547 [2024-11-05 03:30:52.118326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.547 [2024-11-05 03:30:52.118528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.812 ms, result 0 00:20:29.927 00:20:29.927 00:20:29.927 03:30:53 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:29.927 03:30:53 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:30.186 03:30:53 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:30.186 [2024-11-05 03:30:53.765688] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:20:30.186 [2024-11-05 03:30:53.766045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76073 ] 00:20:30.445 [2024-11-05 03:30:53.953015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.704 [2024-11-05 03:30:54.083490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.964 [2024-11-05 03:30:54.481742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.964 [2024-11-05 03:30:54.481833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.224 [2024-11-05 03:30:54.649243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.649326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.224 [2024-11-05 03:30:54.649346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.224 [2024-11-05 03:30:54.649374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.652925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.653117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.224 [2024-11-05 03:30:54.653158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.530 ms 00:20:31.224 [2024-11-05 03:30:54.653171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.653344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.224 [2024-11-05 03:30:54.654412] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.224 [2024-11-05 03:30:54.654450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.654464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.224 [2024-11-05 03:30:54.654478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:20:31.224 [2024-11-05 03:30:54.654490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.657035] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.224 [2024-11-05 03:30:54.676534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.676582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.224 [2024-11-05 03:30:54.676599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.531 ms 00:20:31.224 [2024-11-05 03:30:54.676612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.676722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.676738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.224 [2024-11-05 03:30:54.676751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:31.224 [2024-11-05 03:30:54.676762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.688932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.224 [2024-11-05 03:30:54.688966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.224 [2024-11-05 03:30:54.688981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:20:31.224 [2024-11-05 03:30:54.688993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.689123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.689140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.224 [2024-11-05 03:30:54.689153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:31.224 [2024-11-05 03:30:54.689165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.689198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.689215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.224 [2024-11-05 03:30:54.689228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:31.224 [2024-11-05 03:30:54.689239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.689267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:31.224 [2024-11-05 03:30:54.694790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.694826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.224 [2024-11-05 03:30:54.694841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:20:31.224 [2024-11-05 03:30:54.694853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.694912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.694926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.224 [2024-11-05 03:30:54.694938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:31.224 [2024-11-05 03:30:54.694950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.694973] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.224 [2024-11-05 03:30:54.695006] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.224 [2024-11-05 03:30:54.695045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.224 [2024-11-05 03:30:54.695065] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.224 [2024-11-05 03:30:54.695158] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.224 [2024-11-05 03:30:54.695173] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.224 [2024-11-05 03:30:54.695188] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.224 [2024-11-05 03:30:54.695202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.224 [2024-11-05 03:30:54.695221] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.224 [2024-11-05 03:30:54.695234] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:31.224 [2024-11-05 03:30:54.695246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.224 [2024-11-05 03:30:54.695258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.224 [2024-11-05 03:30:54.695269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.224 [2024-11-05 03:30:54.695282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.695316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.224 [2024-11-05 03:30:54.695345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:31.224 [2024-11-05 03:30:54.695357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.224 [2024-11-05 03:30:54.695437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.224 [2024-11-05 03:30:54.695451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.224 [2024-11-05 03:30:54.695468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:31.225 [2024-11-05 03:30:54.695480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.225 [2024-11-05 03:30:54.695574] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.225 [2024-11-05 03:30:54.695590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.225 [2024-11-05 03:30:54.695604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.225 [2024-11-05 03:30:54.695640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.225 [2024-11-05 03:30:54.695676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.225 [2024-11-05 03:30:54.695699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.225 [2024-11-05 03:30:54.695711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:31.225 [2024-11-05 03:30:54.695723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.225 [2024-11-05 03:30:54.695750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.225 [2024-11-05 03:30:54.695762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:31.225 [2024-11-05 03:30:54.695773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.225 [2024-11-05 03:30:54.695796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695807] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.225 [2024-11-05 03:30:54.695830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.225 [2024-11-05 03:30:54.695864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.225 [2024-11-05 03:30:54.695898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.225 [2024-11-05 03:30:54.695931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.225 [2024-11-05 03:30:54.695953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.225 [2024-11-05 03:30:54.695964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:31.225 [2024-11-05 03:30:54.695974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.225 [2024-11-05 03:30:54.695985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.225 [2024-11-05 03:30:54.695996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:31.225 [2024-11-05 03:30:54.696006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.225 [2024-11-05 03:30:54.696018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.225 [2024-11-05 03:30:54.696029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:31.225 [2024-11-05 03:30:54.696040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.696050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.225 [2024-11-05 03:30:54.696061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:31.225 [2024-11-05 03:30:54.696073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.696083] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.225 [2024-11-05 03:30:54.696096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.225 [2024-11-05 03:30:54.696108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.225 [2024-11-05 03:30:54.696125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.225 [2024-11-05 03:30:54.696137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.225 [2024-11-05 03:30:54.696149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.225 [2024-11-05 03:30:54.696160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.225 
[2024-11-05 03:30:54.696170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.225 [2024-11-05 03:30:54.696181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.225 [2024-11-05 03:30:54.696192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.225 [2024-11-05 03:30:54.696206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.225 [2024-11-05 03:30:54.696220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.696234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:31.225 [2024-11-05 03:30:54.696247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:31.225 [2024-11-05 03:30:54.696260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:31.225 [2024-11-05 03:30:54.696273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:31.225 [2024-11-05 03:30:54.696285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:31.225 [2024-11-05 03:30:54.696297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:31.225 [2024-11-05 03:30:54.696623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:31.225 [2024-11-05 03:30:54.696684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:31.225 [2024-11-05 03:30:54.696742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:31.225 [2024-11-05 03:30:54.696914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.696977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.697033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.697134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.697196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:31.225 [2024-11-05 03:30:54.697254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.225 [2024-11-05 03:30:54.697334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.697480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.225 [2024-11-05 03:30:54.697537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.225 [2024-11-05 03:30:54.697636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.225 [2024-11-05 03:30:54.697738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.225 [2024-11-05 03:30:54.697833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.225 [2024-11-05 03:30:54.697874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.225 [2024-11-05 03:30:54.697921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:20:31.225 [2024-11-05 03:30:54.697956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.225 [2024-11-05 03:30:54.746999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.225 [2024-11-05 03:30:54.747205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.225 [2024-11-05 03:30:54.747350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.955 ms 00:20:31.225 [2024-11-05 03:30:54.747371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.225 [2024-11-05 03:30:54.747550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.225 [2024-11-05 03:30:54.747567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.225 [2024-11-05 03:30:54.747581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:31.225 [2024-11-05 03:30:54.747593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.827453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.827672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.485 [2024-11-05 03:30:54.827705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.958 ms 00:20:31.485 [2024-11-05 03:30:54.827719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.827816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.827832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.485 [2024-11-05 03:30:54.827847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.485 [2024-11-05 03:30:54.827860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.828662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.828680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.485 [2024-11-05 03:30:54.828693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:20:31.485 [2024-11-05 03:30:54.828714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.828859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.828875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.485 [2024-11-05 03:30:54.828890] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:31.485 [2024-11-05 03:30:54.828903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.852228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.852268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.485 [2024-11-05 03:30:54.852284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.334 ms 00:20:31.485 [2024-11-05 03:30:54.852330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.871242] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:31.485 [2024-11-05 03:30:54.871455] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.485 [2024-11-05 03:30:54.871480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.871496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.485 [2024-11-05 03:30:54.871510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.012 ms 00:20:31.485 [2024-11-05 03:30:54.871523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.901383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.901443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.485 [2024-11-05 03:30:54.901460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.811 ms 00:20:31.485 [2024-11-05 03:30:54.901489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.919312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.919354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.485 [2024-11-05 03:30:54.919369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.755 ms 00:20:31.485 [2024-11-05 03:30:54.919382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.937626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.937668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.485 [2024-11-05 03:30:54.937683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.186 ms 00:20:31.485 [2024-11-05 03:30:54.937695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:54.938552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:54.938583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.485 [2024-11-05 03:30:54.938598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:20:31.485 [2024-11-05 03:30:54.938610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:55.033166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.485 [2024-11-05 03:30:55.033270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.485 [2024-11-05 03:30:55.033323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.671 ms 00:20:31.485 [2024-11-05 03:30:55.033338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.485 [2024-11-05 03:30:55.043696] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:31.485 [2024-11-05 03:30:55.068260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.486 [2024-11-05 03:30:55.068331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.486 [2024-11-05 03:30:55.068352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.871 ms 00:20:31.486 [2024-11-05 03:30:55.068382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.486 [2024-11-05 03:30:55.068557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.486 [2024-11-05 03:30:55.068574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.486 [2024-11-05 03:30:55.068589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:31.486 [2024-11-05 03:30:55.068602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.486 [2024-11-05 03:30:55.068675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.486 [2024-11-05 03:30:55.068690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.486 [2024-11-05 03:30:55.068703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:31.486 [2024-11-05 03:30:55.068715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.486 [2024-11-05 03:30:55.068757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.486 [2024-11-05 03:30:55.068775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.486 [2024-11-05 03:30:55.068788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:31.486 [2024-11-05 03:30:55.068800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.486 [2024-11-05 03:30:55.068869] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.486 [2024-11-05 03:30:55.068885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.486 [2024-11-05 03:30:55.068898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.745 [2024-11-05 03:30:55.068911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:31.745 [2024-11-05 03:30:55.068925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.745 [2024-11-05 03:30:55.105553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.745 [2024-11-05 03:30:55.105599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.745 [2024-11-05 03:30:55.105616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.660 ms 00:20:31.745 [2024-11-05 03:30:55.105629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.745 [2024-11-05 03:30:55.105767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.745 [2024-11-05 03:30:55.105783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.745 [2024-11-05 03:30:55.105797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:31.745 [2024-11-05 03:30:55.105809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:31.745 [2024-11-05 03:30:55.107229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.745 [2024-11-05 03:30:55.111422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 458.354 ms, result 0 00:20:31.745 [2024-11-05 03:30:55.112337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.745 [2024-11-05 03:30:55.130170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.008 [2024-11-05T03:30:55.592Z] Copying: 4096/4096 [kB] (average 20 MBps) [2024-11-05 03:30:55.330915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.008 [2024-11-05 03:30:55.345355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.345516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.008 [2024-11-05 03:30:55.345650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.008 [2024-11-05 03:30:55.345703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.345760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:32.008 [2024-11-05 03:30:55.350477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.350613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.008 [2024-11-05 03:30:55.350763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:20:32.008 [2024-11-05 03:30:55.350806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.353012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.353157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.008 [2024-11-05 03:30:55.353236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:20:32.008 [2024-11-05 03:30:55.353276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.356697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.356861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.008 [2024-11-05 03:30:55.356943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:20:32.008 [2024-11-05 03:30:55.356984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.362836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.362995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.008 [2024-11-05 03:30:55.363077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.782 ms 00:20:32.008 [2024-11-05 03:30:55.363117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.399950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.400119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.008 [2024-11-05 03:30:55.400255] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.810 ms 00:20:32.008 [2024-11-05 03:30:55.400319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.421792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.421961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.008 [2024-11-05 03:30:55.422077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.420 ms 00:20:32.008 [2024-11-05 03:30:55.422119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.422297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.422420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.008 [2024-11-05 03:30:55.422496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:32.008 [2024-11-05 03:30:55.422532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.458716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.458860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.008 [2024-11-05 03:30:55.458955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.178 ms 00:20:32.008 [2024-11-05 03:30:55.458996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.493481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.493660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.008 [2024-11-05 03:30:55.493785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.419 ms 00:20:32.008 [2024-11-05 03:30:55.493826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.527386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.527548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.008 [2024-11-05 03:30:55.527698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.530 ms 00:20:32.008 [2024-11-05 03:30:55.527739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.561436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.008 [2024-11-05 03:30:55.561599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.008 [2024-11-05 03:30:55.561755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.638 ms 00:20:32.008 [2024-11-05 03:30:55.561797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.008 [2024-11-05 03:30:55.561880] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.008 [2024-11-05 03:30:55.561984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.008 [2024-11-05 03:30:55.562048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:32.009 [2024-11-05 03:30:55.562423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.562989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563715] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.009 [2024-11-05 03:30:55.563882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.563987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.564000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.010 [2024-11-05 03:30:55.564022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.010 [2024-11-05 03:30:55.564036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce 00:20:32.010 [2024-11-05 03:30:55.564060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.010 [2024-11-05 03:30:55.564071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
00:20:32.010 [2024-11-05 03:30:55.564084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:32.010 [2024-11-05 03:30:55.564096] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:32.010 [2024-11-05 03:30:55.564108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:32.010 [2024-11-05 03:30:55.564120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:32.010 [2024-11-05 03:30:55.564132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:32.010 [2024-11-05 03:30:55.564144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:32.010 [2024-11-05 03:30:55.564155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:32.010 [2024-11-05 03:30:55.564167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.010 [2024-11-05 03:30:55.564186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:32.010 [2024-11-05 03:30:55.564199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms
00:20:32.010 [2024-11-05 03:30:55.564211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.010 [2024-11-05 03:30:55.585096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.010 [2024-11-05 03:30:55.585250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:32.010 [2024-11-05 03:30:55.585391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.890 ms
00:20:32.010 [2024-11-05 03:30:55.585434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.010 [2024-11-05 03:30:55.586086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.010 [2024-11-05 03:30:55.586199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:32.010 [2024-11-05 03:30:55.586279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms
00:20:32.010 [2024-11-05 03:30:55.586335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.278 [2024-11-05 03:30:55.642323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.278 [2024-11-05 03:30:55.642506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:32.278 [2024-11-05 03:30:55.642595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.278 [2024-11-05 03:30:55.642638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.278 [2024-11-05 03:30:55.642767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.278 [2024-11-05 03:30:55.642810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:32.278 [2024-11-05 03:30:55.642846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.278 [2024-11-05 03:30:55.642924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.278 [2024-11-05 03:30:55.643021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.278 [2024-11-05 03:30:55.643063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:32.278 [2024-11-05 03:30:55.643100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.278 [2024-11-05 03:30:55.643135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.278 [2024-11-05 03:30:55.643183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.279 [2024-11-05 03:30:55.643305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:32.279 [2024-11-05 03:30:55.643347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.279 [2024-11-05 03:30:55.643383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.279 [2024-11-05 03:30:55.778577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.279 [2024-11-05 03:30:55.778936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:32.279 [2024-11-05 03:30:55.779098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.279 [2024-11-05 03:30:55.779143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.885833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.886185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:32.538 [2024-11-05 03:30:55.886308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.886355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.886538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.886646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:32.538 [2024-11-05 03:30:55.886691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.886741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.886856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.886981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:32.538 [2024-11-05 03:30:55.887064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.887103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.887268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.887307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:32.538 [2024-11-05 03:30:55.887323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.887336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.887394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.887409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:32.538 [2024-11-05 03:30:55.887423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.887443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.887497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.887513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:32.538 [2024-11-05 03:30:55.887526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.887538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.887598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.538 [2024-11-05 03:30:55.887613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:32.538 [2024-11-05 03:30:55.887632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.538 [2024-11-05 03:30:55.887645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.538 [2024-11-05 03:30:55.887833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.336 ms, result 0
00:20:33.476
00:20:33.476
00:20:33.476 03:30:57 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76110
00:20:33.476 03:30:57 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:33.476 03:30:57 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76110
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76110 ']'
00:20:33.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:33.476 03:30:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:33.735 [2024-11-05 03:30:57.158801] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization...
00:20:33.735 [2024-11-05 03:30:57.159242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76110 ]
00:20:33.994 [2024-11-05 03:30:57.350129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:33.994 [2024-11-05 03:30:57.496682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:35.373 03:30:58 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:35.373 03:30:58 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0
00:20:35.373 03:30:58 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:20:35.373 [2024-11-05 03:30:58.779851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:35.373 [2024-11-05 03:30:58.779939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:35.633 [2024-11-05 03:30:58.974629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:58.974720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:35.633 [2024-11-05 03:30:58.974764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:20:35.633 [2024-11-05 03:30:58.974780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:58.978904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:58.978953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:35.633 [2024-11-05 03:30:58.978972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms
00:20:35.633 [2024-11-05 03:30:58.978986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:58.979113] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:35.633 [2024-11-05 03:30:58.980161] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:35.633 [2024-11-05 03:30:58.980202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:58.980216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:35.633 [2024-11-05 03:30:58.980232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms
00:20:35.633 [2024-11-05 03:30:58.980246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:58.982684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:35.633 [2024-11-05 03:30:59.004148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:59.004205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:35.633 [2024-11-05 03:30:59.004240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.504 ms
00:20:35.633 [2024-11-05 03:30:59.004261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:59.004399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:59.004436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:35.633 [2024-11-05 03:30:59.004451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:20:35.633 [2024-11-05 03:30:59.004470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:59.016544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:59.016597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:35.633 [2024-11-05 03:30:59.016613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.024 ms
00:20:35.633 [2024-11-05 03:30:59.016634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:59.016827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:59.016852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:35.633 [2024-11-05 03:30:59.016867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms
00:20:35.633 [2024-11-05 03:30:59.016888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.633 [2024-11-05 03:30:59.016939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.633 [2024-11-05 03:30:59.016961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:35.633 [2024-11-05 03:30:59.016975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:20:35.633 [2024-11-05 03:30:59.016995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.634 [2024-11-05 03:30:59.017028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:35.634 [2024-11-05 03:30:59.022659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.634 [2024-11-05 03:30:59.022706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:35.634 [2024-11-05 03:30:59.022739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms
00:20:35.634 [2024-11-05 03:30:59.022760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.634 [2024-11-05 03:30:59.022852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.634 [2024-11-05 03:30:59.022867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:35.634 [2024-11-05 03:30:59.022887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:20:35.634 [2024-11-05 03:30:59.022907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.634 [2024-11-05 03:30:59.022943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:35.634 [2024-11-05 03:30:59.022974] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:35.634 [2024-11-05 03:30:59.023036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:35.634 [2024-11-05 03:30:59.023060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:35.634 [2024-11-05 03:30:59.023180] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:35.634 [2024-11-05 03:30:59.023197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:35.634 [2024-11-05 03:30:59.023231] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:35.634 [2024-11-05 03:30:59.023248] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:35.634 [2024-11-05 03:30:59.023270] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:35.634 [2024-11-05 03:30:59.023302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:35.634 [2024-11-05 03:30:59.023324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:35.634 [2024-11-05 03:30:59.023337] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:35.634 [2024-11-05 03:30:59.023362] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:35.634 [2024-11-05 03:30:59.023377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.634 [2024-11-05 03:30:59.023396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:35.634 [2024-11-05 03:30:59.023410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms
00:20:35.634 [2024-11-05 03:30:59.023429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.634 [2024-11-05 03:30:59.023517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.634 [2024-11-05 03:30:59.023538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:35.634 [2024-11-05 03:30:59.023552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:20:35.634 [2024-11-05 03:30:59.023570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.634 [2024-11-05 03:30:59.023686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:35.634 [2024-11-05 03:30:59.023711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:35.634 [2024-11-05 03:30:59.023726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:35.634 [2024-11-05 03:30:59.023746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.023760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:35.634 [2024-11-05 03:30:59.023778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.023791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:35.634 [2024-11-05 03:30:59.023818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:35.634 [2024-11-05 03:30:59.023831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:35.634 [2024-11-05 03:30:59.023849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:35.634 [2024-11-05 03:30:59.023862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:35.634 [2024-11-05 03:30:59.023879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:35.634 [2024-11-05 03:30:59.023892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:35.634 [2024-11-05 03:30:59.023910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:35.634 [2024-11-05 03:30:59.023923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:35.634 [2024-11-05 03:30:59.023943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.023956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:35.634 [2024-11-05 03:30:59.023975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:35.634 [2024-11-05 03:30:59.023987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:35.634 [2024-11-05 03:30:59.024032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:35.634 [2024-11-05 03:30:59.024063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:35.634 [2024-11-05 03:30:59.024087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:35.634 [2024-11-05 03:30:59.024118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:35.634 [2024-11-05 03:30:59.024130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:35.634 [2024-11-05 03:30:59.024162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:35.634 [2024-11-05 03:30:59.024180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:35.634 [2024-11-05 03:30:59.024210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:35.634 [2024-11-05 03:30:59.024223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:35.634 [2024-11-05 03:30:59.024254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:35.634 [2024-11-05 03:30:59.024272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:35.634 [2024-11-05 03:30:59.024503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:35.634 [2024-11-05 03:30:59.024578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:35.634 [2024-11-05 03:30:59.024623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:35.634 [2024-11-05 03:30:59.024673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:35.634 [2024-11-05 03:30:59.024755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:35.634 [2024-11-05 03:30:59.024858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.024912] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:35.634 [2024-11-05 03:30:59.024959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:35.634 [2024-11-05 03:30:59.025004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:35.634 [2024-11-05 03:30:59.025043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.634 [2024-11-05 03:30:59.025203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:35.634 [2024-11-05 03:30:59.025242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:35.634 [2024-11-05 03:30:59.025297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:35.634 [2024-11-05 03:30:59.025388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:35.634 [2024-11-05 03:30:59.025439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:35.634 [2024-11-05 03:30:59.025479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:35.634 [2024-11-05 03:30:59.025717] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:35.634 [2024-11-05 03:30:59.025787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:35.634 [2024-11-05 03:30:59.025859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:35.634 [2024-11-05 03:30:59.026103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:35.634 [2024-11-05 03:30:59.026171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:35.634 [2024-11-05 03:30:59.026228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:35.634 [2024-11-05 03:30:59.026465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:35.634 [2024-11-05 03:30:59.026525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:35.634 [2024-11-05 03:30:59.026587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:35.634 [2024-11-05 03:30:59.026671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:35.634 [2024-11-05 03:30:59.026692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:35.634 [2024-11-05 03:30:59.026716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:35.634 [2024-11-05 03:30:59.026732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:35.634 [2024-11-05 03:30:59.026745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:35.634 [2024-11-05 03:30:59.026762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:35.634 [2024-11-05 03:30:59.026775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:35.634 [2024-11-05 03:30:59.026792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:35.634 [2024-11-05 03:30:59.026806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:35.635 [2024-11-05 03:30:59.026828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:35.635 [2024-11-05 03:30:59.026842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:35.635 [2024-11-05 03:30:59.026858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:35.635 [2024-11-05 03:30:59.026873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:35.635 [2024-11-05 03:30:59.026893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.026907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:35.635 [2024-11-05 03:30:59.026926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms
00:20:35.635 [2024-11-05 03:30:59.026939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.078346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.078584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:35.635 [2024-11-05 03:30:59.078620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.361 ms
00:20:35.635 [2024-11-05 03:30:59.078639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.078838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.078854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:35.635 [2024-11-05 03:30:59.078872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:20:35.635 [2024-11-05 03:30:59.078885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.135923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.135980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:35.635 [2024-11-05 03:30:59.136005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.088 ms
00:20:35.635 [2024-11-05 03:30:59.136019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.136130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.136145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:35.635 [2024-11-05 03:30:59.136166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:35.635 [2024-11-05 03:30:59.136180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.136985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.137010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:35.635 [2024-11-05 03:30:59.137039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms
00:20:35.635 [2024-11-05 03:30:59.137052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.137208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.137225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:35.635 [2024-11-05 03:30:59.137245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms
00:20:35.635 [2024-11-05 03:30:59.137258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.165568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.165615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:35.635 [2024-11-05 03:30:59.165637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.322 ms
00:20:35.635 [2024-11-05 03:30:59.165650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.635 [2024-11-05 03:30:59.186953] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:35.635 [2024-11-05 03:30:59.187015] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:35.635 [2024-11-05 03:30:59.187039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.635 [2024-11-05 03:30:59.187053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:35.635 [2024-11-05 03:30:59.187072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.274 ms
00:20:35.635 [2024-11-05 03:30:59.187085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.894 [2024-11-05 03:30:59.217680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.894 [2024-11-05 03:30:59.217726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:35.894 [2024-11-05 03:30:59.217748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.513 ms
00:20:35.894 [2024-11-05 03:30:59.217761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.894 [2024-11-05 03:30:59.236430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.894 [2024-11-05 03:30:59.236475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:35.894 [2024-11-05 03:30:59.236499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.592 ms
00:20:35.894 [2024-11-05 03:30:59.236512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.894 [2024-11-05 03:30:59.254918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.894 [2024-11-05 03:30:59.254962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:35.894 [2024-11-05 03:30:59.254988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.339 ms
00:20:35.894 [2024-11-05 03:30:59.255001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.894 [2024-11-05 03:30:59.255899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.894 [2024-11-05 03:30:59.255936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:35.894 [2024-11-05 03:30:59.255956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms
00:20:35.894 [2024-11-05 03:30:59.255969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.894 [2024-11-05 03:30:59.368256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.894 [2024-11-05 03:30:59.368357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:35.894 [2024-11-05 03:30:59.368390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.416 ms
00:20:35.894 [2024-11-05 03:30:59.368406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.379669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:35.895 [2024-11-05 03:30:59.405973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.406061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:35.895 [2024-11-05 03:30:59.406091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.490 ms
00:20:35.895 [2024-11-05 03:30:59.406111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.406323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.406364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:35.895 [2024-11-05 03:30:59.406380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:20:35.895 [2024-11-05 03:30:59.406401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.406495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.406517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:35.895 [2024-11-05 03:30:59.406532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:20:35.895 [2024-11-05 03:30:59.406551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.406594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.406615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:35.895 [2024-11-05 03:30:59.406628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:20:35.895 [2024-11-05 03:30:59.406646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.406710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:35.895 [2024-11-05 03:30:59.406755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.406768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:35.895 [2024-11-05 03:30:59.406797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:20:35.895 [2024-11-05 03:30:59.406810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.444963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.445015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:35.895 [2024-11-05 03:30:59.445041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.161 ms
00:20:35.895 [2024-11-05 03:30:59.445054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.445224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.895 [2024-11-05 03:30:59.445240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:35.895 [2024-11-05 03:30:59.445261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:20:35.895 [2024-11-05 03:30:59.445282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.895 [2024-11-05 03:30:59.446766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:35.895 [2024-11-05 03:30:59.451211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 472.466 ms, result 0
00:20:35.895 [2024-11-05 03:30:59.452613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:36.159 Some configs were skipped because the RPC state that can call them passed over.
00:20:36.159 03:30:59 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:36.159 [2024-11-05 03:30:59.708621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:36.159 [2024-11-05 03:30:59.708916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:36.159 [2024-11-05 03:30:59.709093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms
00:20:36.159 [2024-11-05 03:30:59.709147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:36.159 [2024-11-05 03:30:59.709230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.487 ms, result 0
00:20:36.159 true
00:20:36.159 03:30:59 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:36.418 [2024-11-05 03:30:59.924123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:36.418 [2024-11-05 03:30:59.924439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:36.418 [2024-11-05 03:30:59.924571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms
00:20:36.418 [2024-11-05 03:30:59.924619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:36.418 [2024-11-05 03:30:59.924724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.028 ms, result 0
00:20:36.418 true
00:20:36.418 03:30:59 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76110
00:20:36.418 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76110 ']'
00:20:36.418 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76110
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76110
00:20:36.419 killing process with pid 76110
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76110'
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76110
00:20:36.419 03:30:59 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76110
00:20:37.799 [2024-11-05 03:31:01.219999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.220088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:37.799 [2024-11-05 03:31:01.220110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:37.799 [2024-11-05 03:31:01.220126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.220157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:37.799 [2024-11-05 03:31:01.224954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.224999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:37.799 [2024-11-05 03:31:01.225022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms
00:20:37.799 [2024-11-05 03:31:01.225035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.225359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.225378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:37.799 [2024-11-05 03:31:01.225395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms
00:20:37.799 [2024-11-05 03:31:01.225408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.228930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.228977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:37.799 [2024-11-05 03:31:01.229000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms
00:20:37.799 [2024-11-05 03:31:01.229013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.234662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.234715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:37.799 [2024-11-05 03:31:01.234751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.604 ms
00:20:37.799 [2024-11-05 03:31:01.234764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.250798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.250839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:37.799 [2024-11-05 03:31:01.250863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.977 ms
00:20:37.799 [2024-11-05 03:31:01.250889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.262515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.262560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:37.799 [2024-11-05 03:31:01.262586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.552 ms
00:20:37.799 [2024-11-05 03:31:01.262599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.262784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.262801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:37.799 [2024-11-05 03:31:01.262818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:20:37.799 [2024-11-05 03:31:01.262831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.799 [2024-11-05 03:31:01.279238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.799 [2024-11-05 03:31:01.279280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:37.800 [2024-11-05 03:31:01.279324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.402 ms
00:20:37.800 [2024-11-05 03:31:01.279337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.800 [2024-11-05 03:31:01.294839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.800 [2024-11-05 03:31:01.294882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:37.800 [2024-11-05 03:31:01.294911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.453 ms
00:20:37.800 [2024-11-05 03:31:01.294924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.800 [2024-11-05 03:31:01.309903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.800 [2024-11-05 03:31:01.309942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:37.800 [2024-11-05 03:31:01.309981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.929 ms
00:20:37.800 [2024-11-05 03:31:01.309994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.800 [2024-11-05 03:31:01.324514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.800 [2024-11-05 03:31:01.324555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:37.800 [2024-11-05 03:31:01.324579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms
00:20:37.800 [2024-11-05 03:31:01.324592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.800 [2024-11-05 03:31:01.324657] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:37.800 [2024-11-05 03:31:01.324678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.324998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:20:37.800 [2024-11-05 03:31:01.325945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.325959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.325978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.325992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:37.801 [2024-11-05 03:31:01.326455] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:37.801 [2024-11-05 03:31:01.326489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce
00:20:37.801 [2024-11-05 03:31:01.326519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:37.801 [2024-11-05 03:31:01.326550] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:37.801 [2024-11-05 03:31:01.326563] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:37.801 [2024-11-05 03:31:01.326583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:37.801 [2024-11-05 03:31:01.326596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:37.801 [2024-11-05 03:31:01.326616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:37.801 [2024-11-05 03:31:01.326628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:37.801 [2024-11-05 03:31:01.326646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:37.801 [2024-11-05 03:31:01.326658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:37.801 [2024-11-05 03:31:01.326677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.801 [2024-11-05 03:31:01.326691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.801 [2024-11-05 03:31:01.326721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:20:37.801 [2024-11-05 03:31:01.326733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.801 [2024-11-05 03:31:01.348655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.801 [2024-11-05 03:31:01.348698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.801 [2024-11-05 03:31:01.348728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.910 ms 00:20:37.801 [2024-11-05 03:31:01.348741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.801 [2024-11-05 03:31:01.349441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.801 [2024-11-05 03:31:01.349462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.801 [2024-11-05 03:31:01.349483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:20:37.801 [2024-11-05 03:31:01.349503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.060 [2024-11-05 03:31:01.424801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.060 [2024-11-05 03:31:01.424850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.060 [2024-11-05 03:31:01.424875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.060 [2024-11-05 03:31:01.424889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.060 [2024-11-05 03:31:01.425045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.060 [2024-11-05 03:31:01.425061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.060 [2024-11-05 03:31:01.425082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.060 [2024-11-05 03:31:01.425103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.060 [2024-11-05 03:31:01.425180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.060 [2024-11-05 03:31:01.425196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.060 [2024-11-05 03:31:01.425223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.060 [2024-11-05 03:31:01.425236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.060 [2024-11-05 03:31:01.425268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.060 [2024-11-05 03:31:01.425281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.060 [2024-11-05 03:31:01.425323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.060 [2024-11-05 03:31:01.425337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.060 [2024-11-05 03:31:01.560684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.060 [2024-11-05 03:31:01.560759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.060 [2024-11-05 03:31:01.560784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.060 [2024-11-05 03:31:01.560798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 
03:31:01.662518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.662596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.320 [2024-11-05 03:31:01.662636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.662655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.662831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.662846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.320 [2024-11-05 03:31:01.662868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.662881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.662922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.662936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.320 [2024-11-05 03:31:01.662953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.662966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.663101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.663118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.320 [2024-11-05 03:31:01.663135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.663147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.663202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.663217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.320 [2024-11-05 03:31:01.663233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.663247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.663347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.663366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.320 [2024-11-05 03:31:01.663388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.663401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.663468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.320 [2024-11-05 03:31:01.663483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.320 [2024-11-05 03:31:01.663500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.320 [2024-11-05 03:31:01.663514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.320 [2024-11-05 03:31:01.663701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.388 ms, result 0 00:20:39.258 03:31:02 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:39.517 [2024-11-05 03:31:02.867664] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:20:39.517 [2024-11-05 03:31:02.867846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76189 ] 00:20:39.517 [2024-11-05 03:31:03.059638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.776 [2024-11-05 03:31:03.196141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.352 [2024-11-05 03:31:03.622771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.352 [2024-11-05 03:31:03.622861] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.352 [2024-11-05 03:31:03.792438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.792505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.352 [2024-11-05 03:31:03.792525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.352 [2024-11-05 03:31:03.792538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.796084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.796303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.352 [2024-11-05 03:31:03.796329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 00:20:40.352 [2024-11-05 03:31:03.796343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.796506] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.352 [2024-11-05 03:31:03.797543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.352 [2024-11-05 03:31:03.797583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.797597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.352 [2024-11-05 03:31:03.797611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:20:40.352 [2024-11-05 03:31:03.797624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.800187] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.352 [2024-11-05 03:31:03.820785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.820836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.352 [2024-11-05 03:31:03.820854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.633 ms 00:20:40.352 [2024-11-05 03:31:03.820884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.820999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.821017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.352 [2024-11-05 03:31:03.821031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:40.352 [2024-11-05 
03:31:03.821044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.833393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.833427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.352 [2024-11-05 03:31:03.833443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:20:40.352 [2024-11-05 03:31:03.833457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.833593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.833612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.352 [2024-11-05 03:31:03.833627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:40.352 [2024-11-05 03:31:03.833639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.833674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.833692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.352 [2024-11-05 03:31:03.833706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.352 [2024-11-05 03:31:03.833718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.833748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:40.352 [2024-11-05 03:31:03.839673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.839712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.352 [2024-11-05 03:31:03.839728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:20:40.352 [2024-11-05 03:31:03.839740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.839802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.839816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.352 [2024-11-05 03:31:03.839830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:40.352 [2024-11-05 03:31:03.839842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.839868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.352 [2024-11-05 03:31:03.839904] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.352 [2024-11-05 03:31:03.839947] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.352 [2024-11-05 03:31:03.839970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.352 [2024-11-05 03:31:03.840069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.352 [2024-11-05 03:31:03.840086] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.352 [2024-11-05 03:31:03.840104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
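Each FTL management step in this log is emitted by mngt/ftl_mngt.c as a four-record group: Action (or Rollback), name:, duration:, and status:. When triaging a slow startup or shutdown it can help to aggregate those durations per step name. A minimal sketch, assuming the console log is piped on stdin; the script and its regexes are illustrative only and not part of SPDK or of this test:

#!/usr/bin/env python3
"""Aggregate FTL trace_step durations from an SPDK autotest console log.

Illustrative only: the record shapes are taken from the log above; the
script itself is a hypothetical helper, not an SPDK tool.
"""
import re
import sys
from collections import defaultdict

# "name: <step>" ends at a newline or, in flowed logs like this one,
# just before the next harness timestamp (e.g. "00:20:40.352").
NAME_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}\s|\n|$)")
DUR_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_totals(text):
    # name: and duration: records alternate per management step, so
    # zipping the two match lists pairs each step with its elapsed time
    # (assumption: every name record is followed by a duration record,
    # which holds for the trace_step output shown above).
    totals = defaultdict(float)
    for name, ms in zip(NAME_RE.findall(text), DUR_RE.findall(text)):
        totals[name] += float(ms)
    return totals

if __name__ == "__main__":
    for name, ms in sorted(step_totals(sys.stdin.read()).items(),
                           key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

Run as, e.g., python3 ftl_steps.py < console.log (file name hypothetical); for the startup above it would surface "Initialize NV cache" (69.037 ms) and "Initialize metadata" (50.213 ms) as the dominant steps.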
00:20:40.352 [2024-11-05 03:31:03.840120] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.352 [2024-11-05 03:31:03.840140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.352 [2024-11-05 03:31:03.840167] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:40.352 [2024-11-05 03:31:03.840180] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.352 [2024-11-05 03:31:03.840193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.352 [2024-11-05 03:31:03.840207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.352 [2024-11-05 03:31:03.840219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.840232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.352 [2024-11-05 03:31:03.840245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:20:40.352 [2024-11-05 03:31:03.840258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.840362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.352 [2024-11-05 03:31:03.840378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.352 [2024-11-05 03:31:03.840397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:40.352 [2024-11-05 03:31:03.840410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.352 [2024-11-05 03:31:03.840506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.352 [2024-11-05 03:31:03.840523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.352 [2024-11-05 03:31:03.840536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.352 [2024-11-05 03:31:03.840549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.352 [2024-11-05 03:31:03.840562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.352 [2024-11-05 03:31:03.840574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.352 [2024-11-05 03:31:03.840586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:40.352 [2024-11-05 03:31:03.840600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.352 [2024-11-05 03:31:03.840612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:40.352 [2024-11-05 03:31:03.840623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.352 [2024-11-05 03:31:03.840635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.352 [2024-11-05 03:31:03.840646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:40.352 [2024-11-05 03:31:03.840659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.352 [2024-11-05 03:31:03.840685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.353 [2024-11-05 03:31:03.840697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:40.353 [2024-11-05 03:31:03.840709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:40.353 [2024-11-05 03:31:03.840733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:40.353 [2024-11-05 03:31:03.840745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.353 [2024-11-05 03:31:03.840768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.353 [2024-11-05 03:31:03.840792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.353 [2024-11-05 03:31:03.840803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.353 [2024-11-05 03:31:03.840826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.353 [2024-11-05 03:31:03.840838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.353 [2024-11-05 03:31:03.840861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.353 [2024-11-05 03:31:03.840872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.353 [2024-11-05 03:31:03.840894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.353 [2024-11-05 03:31:03.840906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.353 [2024-11-05 03:31:03.840928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.353 [2024-11-05 03:31:03.840939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:40.353 [2024-11-05 03:31:03.840950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.353 [2024-11-05 03:31:03.840962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.353 [2024-11-05 03:31:03.840973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:40.353 [2024-11-05 03:31:03.840984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.353 [2024-11-05 03:31:03.840995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.353 [2024-11-05 03:31:03.841006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:40.353 [2024-11-05 03:31:03.841018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.353 [2024-11-05 03:31:03.841030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.353 [2024-11-05 03:31:03.841044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.353 [2024-11-05 03:31:03.841056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.353 [2024-11-05 03:31:03.841073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.353 [2024-11-05 03:31:03.841086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.353 [2024-11-05 03:31:03.841098] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.353 [2024-11-05 03:31:03.841110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.353 [2024-11-05 03:31:03.841121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.353 [2024-11-05 03:31:03.841133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.353 [2024-11-05 03:31:03.841144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.353 [2024-11-05 03:31:03.841158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.353 [2024-11-05 03:31:03.841173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:40.353 [2024-11-05 03:31:03.841201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:40.353 [2024-11-05 03:31:03.841214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:40.353 [2024-11-05 03:31:03.841227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:40.353 [2024-11-05 03:31:03.841241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:40.353 [2024-11-05 03:31:03.841253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:40.353 [2024-11-05 03:31:03.841266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:40.353 [2024-11-05 03:31:03.841279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:40.353 [2024-11-05 03:31:03.841309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:40.353 [2024-11-05 03:31:03.841322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:40.353 [2024-11-05 03:31:03.841386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.353 [2024-11-05 03:31:03.841400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.353 [2024-11-05 03:31:03.841427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.353 [2024-11-05 03:31:03.841440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.353 [2024-11-05 03:31:03.841452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.353 [2024-11-05 03:31:03.841465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.353 [2024-11-05 03:31:03.841480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.353 [2024-11-05 03:31:03.841508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:20:40.353 [2024-11-05 03:31:03.841520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.353 [2024-11-05 03:31:03.891718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.353 [2024-11-05 03:31:03.891768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.353 [2024-11-05 03:31:03.891785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.213 ms 00:20:40.353 [2024-11-05 03:31:03.891800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.353 [2024-11-05 03:31:03.891983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.353 [2024-11-05 03:31:03.892000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.353 [2024-11-05 03:31:03.892015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:40.353 [2024-11-05 03:31:03.892027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.620 [2024-11-05 03:31:03.960984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.620 [2024-11-05 03:31:03.961218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.620 [2024-11-05 03:31:03.961252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.037 ms 00:20:40.620 [2024-11-05 03:31:03.961266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.620 [2024-11-05 03:31:03.961385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.620 [2024-11-05 03:31:03.961403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.620 [2024-11-05 03:31:03.961418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.621 [2024-11-05 03:31:03.961431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:03.962160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:03.962177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.621 [2024-11-05 03:31:03.962190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:20:40.621 [2024-11-05 03:31:03.962212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:03.962367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:03.962385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.621 [2024-11-05 03:31:03.962399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:40.621 [2024-11-05 03:31:03.962411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:03.987081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:03.987124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.621 [2024-11-05 03:31:03.987141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.680 ms 00:20:40.621 [2024-11-05 03:31:03.987155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.007859] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.621 [2024-11-05 03:31:04.007905] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.621 [2024-11-05 03:31:04.007923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:04.007935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.621 [2024-11-05 03:31:04.007950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.639 ms 00:20:40.621 [2024-11-05 03:31:04.007962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.038997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:04.039058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.621 [2024-11-05 03:31:04.039075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.986 ms 00:20:40.621 [2024-11-05 03:31:04.039088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.057859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:04.057901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.621 [2024-11-05 03:31:04.057917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.704 ms 00:20:40.621 [2024-11-05 03:31:04.057930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.076739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:04.076918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.621 [2024-11-05 03:31:04.076942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.749 ms 00:20:40.621 [2024-11-05 03:31:04.076954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.077811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 03:31:04.077842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.621 [2024-11-05 03:31:04.077857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:20:40.621 [2024-11-05 03:31:04.077870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.175990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.621 [2024-11-05 
03:31:04.176073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.621 [2024-11-05 03:31:04.176095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.240 ms 00:20:40.621 [2024-11-05 03:31:04.176109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.621 [2024-11-05 03:31:04.187308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:40.880 [2024-11-05 03:31:04.212590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.212656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.880 [2024-11-05 03:31:04.212693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.413 ms 00:20:40.880 [2024-11-05 03:31:04.212707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.212940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.212957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.880 [2024-11-05 03:31:04.212972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.880 [2024-11-05 03:31:04.212985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.213076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.213091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.880 [2024-11-05 03:31:04.213105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:40.880 [2024-11-05 03:31:04.213118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.213159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.213179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.880 [2024-11-05 03:31:04.213193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.880 [2024-11-05 03:31:04.213205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.213257] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.880 [2024-11-05 03:31:04.213272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.213287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.880 [2024-11-05 03:31:04.213299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:40.880 [2024-11-05 03:31:04.213312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.252145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.252339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.880 [2024-11-05 03:31:04.252430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.866 ms 00:20:40.880 [2024-11-05 03:31:04.252474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.252645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.880 [2024-11-05 03:31:04.252892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.880 [2024-11-05 
03:31:04.252914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:40.880 [2024-11-05 03:31:04.252928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.880 [2024-11-05 03:31:04.254372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.880 [2024-11-05 03:31:04.258660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 462.284 ms, result 0 00:20:40.880 [2024-11-05 03:31:04.259664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:40.880 [2024-11-05 03:31:04.278625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.817  [2024-11-05T03:31:06.337Z] Copying: 25/256 [MB] (25 MBps) [2024-11-05T03:31:07.714Z] Copying: 48/256 [MB] (23 MBps) [2024-11-05T03:31:08.649Z] Copying: 73/256 [MB] (24 MBps) [2024-11-05T03:31:09.584Z] Copying: 97/256 [MB] (23 MBps) [2024-11-05T03:31:10.517Z] Copying: 121/256 [MB] (24 MBps) [2024-11-05T03:31:11.456Z] Copying: 145/256 [MB] (24 MBps) [2024-11-05T03:31:12.391Z] Copying: 170/256 [MB] (24 MBps) [2024-11-05T03:31:13.327Z] Copying: 194/256 [MB] (23 MBps) [2024-11-05T03:31:14.704Z] Copying: 218/256 [MB] (23 MBps) [2024-11-05T03:31:14.963Z] Copying: 241/256 [MB] (23 MBps) [2024-11-05T03:31:15.532Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-05 03:31:15.391847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.948 [2024-11-05 03:31:15.415949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.416008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:51.948 [2024-11-05 03:31:15.416029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:51.948 [2024-11-05 03:31:15.416054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.416090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:51.948 [2024-11-05 03:31:15.421231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.421414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:51.948 [2024-11-05 03:31:15.421552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:20:51.948 [2024-11-05 03:31:15.421572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.421866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.421882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:51.948 [2024-11-05 03:31:15.421896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:51.948 [2024-11-05 03:31:15.421909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.424800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.424937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:51.948 [2024-11-05 03:31:15.425081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:20:51.948 [2024-11-05 03:31:15.425125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.430495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.430641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:51.948 [2024-11-05 03:31:15.430983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.314 ms 00:20:51.948 [2024-11-05 03:31:15.431004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.465400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.465444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.948 [2024-11-05 03:31:15.465460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.353 ms 00:20:51.948 [2024-11-05 03:31:15.465489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.486735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.486786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.948 [2024-11-05 03:31:15.486803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.209 ms 00:20:51.948 [2024-11-05 03:31:15.486820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.486966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.486981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.948 [2024-11-05 03:31:15.486994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:51.948 [2024-11-05 03:31:15.487006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.948 [2024-11-05 03:31:15.522183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.948 [2024-11-05 03:31:15.522237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.948 [2024-11-05 03:31:15.522252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.197 ms 00:20:51.948 [2024-11-05 03:31:15.522263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.209 [2024-11-05 03:31:15.557167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.209 [2024-11-05 03:31:15.557207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.209 [2024-11-05 03:31:15.557222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.867 ms 00:20:52.209 [2024-11-05 03:31:15.557233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.209 [2024-11-05 03:31:15.590616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.209 [2024-11-05 03:31:15.590656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.209 [2024-11-05 03:31:15.590671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.360 ms 00:20:52.209 [2024-11-05 03:31:15.590705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.209 [2024-11-05 03:31:15.624830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.209 [2024-11-05 03:31:15.624881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.209 [2024-11-05 03:31:15.624897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.065 ms 00:20:52.209 
[2024-11-05 03:31:15.624925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.209 [2024-11-05 03:31:15.624989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.209 [2024-11-05 03:31:15.625008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 03:31:15.625634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.209 [2024-11-05 
03:31:15.625646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
[Bands 50-100 elided -- all 51 remaining band entries are identical: 0 / 261120 wr_cnt: 0 state: free]
00:20:52.210 [2024-11-05 03:31:15.626318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:52.210 [2024-11-05 03:31:15.626330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24647937-08ab-4dc6-a95c-fd93b438c7ce
00:20:52.210 [2024-11-05 03:31:15.626343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:52.210 [2024-11-05 03:31:15.626354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:52.210 [2024-11-05 03:31:15.626366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:52.210 [2024-11-05 03:31:15.626378] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:52.210 [2024-11-05 03:31:15.626390] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:52.210 [2024-11-05 03:31:15.626402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:52.210 [2024-11-05 03:31:15.626413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:52.210 [2024-11-05 03:31:15.626424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:52.210 [2024-11-05 03:31:15.626435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:52.210 [2024-11-05 03:31:15.626446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:52.210 [2024-11-05 03:31:15.626464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:52.210 [2024-11-05 03:31:15.626478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms
00:20:52.210 [2024-11-05 03:31:15.626490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.210 [2024-11-05 03:31:15.646521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:52.210 [2024-11-05 03:31:15.646557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:52.210 [2024-11-05 03:31:15.646572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.038 ms
00:20:52.210 [2024-11-05 03:31:15.646583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.210 [2024-11-05 03:31:15.647221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:52.210 [2024-11-05 03:31:15.647248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:52.210 [2024-11-05 03:31:15.647261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms
00:20:52.210 [2024-11-05 03:31:15.647273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.210 [2024-11-05 03:31:15.702329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.210 [2024-11-05 03:31:15.702371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:52.210 [2024-11-05 03:31:15.702387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.210 [2024-11-05 03:31:15.702416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
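A note on the statistics block above: ftl_debug.c prints WAF (write-amplification factor) right after the two write counters, and the "inf" is expected for this run -- it is consistent with a ratio of media writes to user writes, 960 / 0, which is reported as infinity because the trim test only generated FTL metadata writes. A minimal sketch for recomputing it from a saved copy of this log; the file name ftl.log and a one-entry-per-line layout are assumptions, not part of the test:

  # Recompute WAF = total writes / user writes from the dump_stats lines.
  awk '/total writes:/ {t = $NF}
       /user writes:/  {u = $NF}
       END {print (u ? t / u : "inf")}' ftl.log   # -> inf (t=960, u=0)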
00:20:52.210 [2024-11-05 03:31:15.702544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.210 [2024-11-05 03:31:15.702557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:52.210 [2024-11-05 03:31:15.702570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.210 [2024-11-05 03:31:15.702581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.210 [2024-11-05 03:31:15.702645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.210 [2024-11-05 03:31:15.702661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:52.210 [2024-11-05 03:31:15.702673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.210 [2024-11-05 03:31:15.702685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.210 [2024-11-05 03:31:15.702716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.210 [2024-11-05 03:31:15.702751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:52.210 [2024-11-05 03:31:15.702764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.210 [2024-11-05 03:31:15.702776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.470 [2024-11-05 03:31:15.829497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.470 [2024-11-05 03:31:15.829779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:52.470 [2024-11-05 03:31:15.829938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.470 [2024-11-05 03:31:15.829984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.470 [2024-11-05 03:31:15.933481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.470 [2024-11-05 03:31:15.933701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:52.470 [2024-11-05 03:31:15.933792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.470 [2024-11-05 03:31:15.933836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.470 [2024-11-05 03:31:15.933983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.470 [2024-11-05 03:31:15.934024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:52.470 [2024-11-05 03:31:15.934125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.470 [2024-11-05 03:31:15.934167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.470 [2024-11-05 03:31:15.934240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.471 [2024-11-05 03:31:15.934279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:52.471 [2024-11-05 03:31:15.934457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.471 [2024-11-05 03:31:15.934473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.471 [2024-11-05 03:31:15.934614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:52.471 [2024-11-05 03:31:15.934631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:52.471 [2024-11-05 03:31:15.934645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:52.471 [2024-11-05 03:31:15.934658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.471 [2024-11-05 03:31:15.934721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Rollback 00:20:52.471 [2024-11-05 03:31:15.934735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.471 [2024-11-05 03:31:15.934749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.471 [2024-11-05 03:31:15.934769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.471 [2024-11-05 03:31:15.934823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.471 [2024-11-05 03:31:15.934836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.471 [2024-11-05 03:31:15.934850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.471 [2024-11-05 03:31:15.934862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.471 [2024-11-05 03:31:15.934923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.471 [2024-11-05 03:31:15.934938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.471 [2024-11-05 03:31:15.934956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.471 [2024-11-05 03:31:15.934969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.471 [2024-11-05 03:31:15.935159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.058 ms, result 0 00:20:53.850 00:20:53.850 00:20:53.850 03:31:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:54.110 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:54.110 03:31:17 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76110 00:20:54.110 03:31:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76110 ']' 00:20:54.110 03:31:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76110 00:20:54.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76110) - No such process 00:20:54.110 Process with pid 76110 is not found 00:20:54.110 03:31:17 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 76110 is not found' 00:20:54.110 00:20:54.110 real 1m17.831s 00:20:54.110 user 1m49.484s 00:20:54.110 sys 0m8.212s 00:20:54.110 03:31:17 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:54.110 ************************************ 00:20:54.110 END TEST ftl_trim 00:20:54.110 ************************************ 00:20:54.110 03:31:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:54.110 03:31:17 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:54.110 03:31:17 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:54.110 03:31:17 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:54.110 03:31:17 ftl -- common/autotest_common.sh@10 
-- # set +x 00:20:54.389 ************************************ 00:20:54.389 START TEST ftl_restore 00:20:54.389 ************************************ 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:54.389 * Looking for test storage... 00:20:54.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.389 03:31:17 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:54.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.389 --rc genhtml_branch_coverage=1 00:20:54.389 --rc genhtml_function_coverage=1 00:20:54.389 --rc genhtml_legend=1 00:20:54.389 --rc geninfo_all_blocks=1 00:20:54.389 --rc geninfo_unexecuted_blocks=1 00:20:54.389 00:20:54.389 ' 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:54.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.389 --rc genhtml_branch_coverage=1 00:20:54.389 --rc genhtml_function_coverage=1 00:20:54.389 --rc genhtml_legend=1 00:20:54.389 --rc geninfo_all_blocks=1 00:20:54.389 --rc geninfo_unexecuted_blocks=1 00:20:54.389 00:20:54.389 ' 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:54.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.389 --rc genhtml_branch_coverage=1 00:20:54.389 --rc genhtml_function_coverage=1 00:20:54.389 --rc genhtml_legend=1 00:20:54.389 --rc geninfo_all_blocks=1 00:20:54.389 --rc geninfo_unexecuted_blocks=1 00:20:54.389 00:20:54.389 ' 00:20:54.389 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:54.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.389 --rc genhtml_branch_coverage=1 00:20:54.389 --rc genhtml_function_coverage=1 00:20:54.389 --rc genhtml_legend=1 00:20:54.389 --rc geninfo_all_blocks=1 00:20:54.389 --rc geninfo_unexecuted_blocks=1 00:20:54.389 00:20:54.389 ' 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
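The xtrace above is scripts/common.sh checking whether the installed lcov (1.15) predates version 2 before choosing the coverage flags: cmp_versions splits both version strings on '.', '-' and ':' (the IFS=.-: reads) and compares them component by component. A standalone sketch of the same idea -- the function name ver_lt is illustrative, and plain numeric components are assumed:

  ver_lt() {                          # succeeds when $1 sorts before $2
    local IFS=.-:                     # same separators common.sh splits on
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing parts count as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "pre-2.x lcov"   # 1 < 2 on the first component

This matches the outcome logged here: lt 1.15 2 succeeded, so the --rc lcov_branch_coverage/lcov_function_coverage options were exported.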
00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:54.389 03:31:17 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:54.662 03:31:17 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:54.662 03:31:17 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:54.662 03:31:17 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:54.662 03:31:17 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FZaAkF0qeH 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:54.663 
03:31:17 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76406 00:20:54.663 03:31:17 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76406 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76406 ']' 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:54.663 03:31:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:54.663 [2024-11-05 03:31:18.066008] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:20:54.663 [2024-11-05 03:31:18.066298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76406 ] 00:20:54.921 [2024-11-05 03:31:18.250349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.921 [2024-11-05 03:31:18.392452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.858 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.858 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:55.858 03:31:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:56.129 03:31:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:56.129 03:31:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:56.129 03:31:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:56.129 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:56.129 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:56.129 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:56.129 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:56.129 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:56.398 { 00:20:56.398 "name": "nvme0n1", 00:20:56.398 "aliases": [ 00:20:56.398 "ccec1c6a-9fcd-4395-9811-ce4992a210e5" 00:20:56.398 ], 00:20:56.398 "product_name": "NVMe disk", 00:20:56.398 "block_size": 4096, 00:20:56.398 "num_blocks": 1310720, 00:20:56.398 "uuid": 
"ccec1c6a-9fcd-4395-9811-ce4992a210e5", 00:20:56.398 "numa_id": -1, 00:20:56.398 "assigned_rate_limits": { 00:20:56.398 "rw_ios_per_sec": 0, 00:20:56.398 "rw_mbytes_per_sec": 0, 00:20:56.398 "r_mbytes_per_sec": 0, 00:20:56.398 "w_mbytes_per_sec": 0 00:20:56.398 }, 00:20:56.398 "claimed": true, 00:20:56.398 "claim_type": "read_many_write_one", 00:20:56.398 "zoned": false, 00:20:56.398 "supported_io_types": { 00:20:56.398 "read": true, 00:20:56.398 "write": true, 00:20:56.398 "unmap": true, 00:20:56.398 "flush": true, 00:20:56.398 "reset": true, 00:20:56.398 "nvme_admin": true, 00:20:56.398 "nvme_io": true, 00:20:56.398 "nvme_io_md": false, 00:20:56.398 "write_zeroes": true, 00:20:56.398 "zcopy": false, 00:20:56.398 "get_zone_info": false, 00:20:56.398 "zone_management": false, 00:20:56.398 "zone_append": false, 00:20:56.398 "compare": true, 00:20:56.398 "compare_and_write": false, 00:20:56.398 "abort": true, 00:20:56.398 "seek_hole": false, 00:20:56.398 "seek_data": false, 00:20:56.398 "copy": true, 00:20:56.398 "nvme_iov_md": false 00:20:56.398 }, 00:20:56.398 "driver_specific": { 00:20:56.398 "nvme": [ 00:20:56.398 { 00:20:56.398 "pci_address": "0000:00:11.0", 00:20:56.398 "trid": { 00:20:56.398 "trtype": "PCIe", 00:20:56.398 "traddr": "0000:00:11.0" 00:20:56.398 }, 00:20:56.398 "ctrlr_data": { 00:20:56.398 "cntlid": 0, 00:20:56.398 "vendor_id": "0x1b36", 00:20:56.398 "model_number": "QEMU NVMe Ctrl", 00:20:56.398 "serial_number": "12341", 00:20:56.398 "firmware_revision": "8.0.0", 00:20:56.398 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:56.398 "oacs": { 00:20:56.398 "security": 0, 00:20:56.398 "format": 1, 00:20:56.398 "firmware": 0, 00:20:56.398 "ns_manage": 1 00:20:56.398 }, 00:20:56.398 "multi_ctrlr": false, 00:20:56.398 "ana_reporting": false 00:20:56.398 }, 00:20:56.398 "vs": { 00:20:56.398 "nvme_version": "1.4" 00:20:56.398 }, 00:20:56.398 "ns_data": { 00:20:56.398 "id": 1, 00:20:56.398 "can_share": false 00:20:56.398 } 00:20:56.398 } 00:20:56.398 ], 00:20:56.398 "mp_policy": "active_passive" 00:20:56.398 } 00:20:56.398 } 00:20:56.398 ]' 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:56.398 03:31:19 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:20:56.398 03:31:19 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:56.398 03:31:19 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:56.398 03:31:19 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:56.398 03:31:19 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:56.398 03:31:19 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:56.657 03:31:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8dfb7b49-d984-4ae2-832a-45f8fecd9639 00:20:56.657 03:31:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:56.657 03:31:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8dfb7b49-d984-4ae2-832a-45f8fecd9639 00:20:56.915 03:31:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:57.173 03:31:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=567a822b-1ff2-41b5-b37f-d1d69a7de89b 00:20:57.173 03:31:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 567a822b-1ff2-41b5-b37f-d1d69a7de89b 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:57.433 03:31:20 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:57.433 03:31:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:57.433 03:31:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:57.433 03:31:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.433 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:57.433 { 00:20:57.433 "name": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:57.433 "aliases": [ 00:20:57.433 "lvs/nvme0n1p0" 00:20:57.433 ], 00:20:57.433 "product_name": "Logical Volume", 00:20:57.433 "block_size": 4096, 00:20:57.433 "num_blocks": 26476544, 00:20:57.433 "uuid": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:57.433 "assigned_rate_limits": { 00:20:57.433 "rw_ios_per_sec": 0, 00:20:57.433 "rw_mbytes_per_sec": 0, 00:20:57.433 "r_mbytes_per_sec": 0, 00:20:57.433 "w_mbytes_per_sec": 0 00:20:57.433 }, 00:20:57.433 "claimed": false, 00:20:57.433 "zoned": false, 00:20:57.433 "supported_io_types": { 00:20:57.433 "read": true, 00:20:57.433 "write": true, 00:20:57.433 "unmap": true, 00:20:57.433 "flush": false, 00:20:57.433 "reset": true, 00:20:57.433 "nvme_admin": false, 00:20:57.433 "nvme_io": false, 00:20:57.433 "nvme_io_md": false, 00:20:57.433 "write_zeroes": true, 00:20:57.433 "zcopy": false, 00:20:57.433 "get_zone_info": false, 00:20:57.433 "zone_management": false, 00:20:57.433 "zone_append": false, 00:20:57.433 "compare": false, 00:20:57.433 "compare_and_write": false, 00:20:57.433 "abort": false, 00:20:57.433 "seek_hole": true, 00:20:57.433 "seek_data": true, 00:20:57.433 "copy": false, 00:20:57.433 "nvme_iov_md": false 00:20:57.433 }, 00:20:57.433 "driver_specific": { 00:20:57.433 "lvol": { 00:20:57.433 "lvol_store_uuid": "567a822b-1ff2-41b5-b37f-d1d69a7de89b", 00:20:57.433 "base_bdev": "nvme0n1", 00:20:57.433 "thin_provision": true, 00:20:57.433 "num_allocated_clusters": 0, 00:20:57.433 "snapshot": false, 00:20:57.433 "clone": false, 00:20:57.433 "esnap_clone": false 00:20:57.433 } 00:20:57.433 } 00:20:57.433 } 00:20:57.433 ]' 00:20:57.433 03:31:21 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:57.692 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:57.692 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:57.692 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:57.692 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:57.692 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:57.692 03:31:21 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:57.692 03:31:21 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:57.692 03:31:21 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:57.951 03:31:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:57.951 03:31:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:57.951 03:31:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:57.951 { 00:20:57.951 "name": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:57.951 "aliases": [ 00:20:57.951 "lvs/nvme0n1p0" 00:20:57.951 ], 00:20:57.951 "product_name": "Logical Volume", 00:20:57.951 "block_size": 4096, 00:20:57.951 "num_blocks": 26476544, 00:20:57.951 "uuid": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:57.951 "assigned_rate_limits": { 00:20:57.951 "rw_ios_per_sec": 0, 00:20:57.951 "rw_mbytes_per_sec": 0, 00:20:57.951 "r_mbytes_per_sec": 0, 00:20:57.951 "w_mbytes_per_sec": 0 00:20:57.951 }, 00:20:57.951 "claimed": false, 00:20:57.951 "zoned": false, 00:20:57.951 "supported_io_types": { 00:20:57.951 "read": true, 00:20:57.951 "write": true, 00:20:57.951 "unmap": true, 00:20:57.951 "flush": false, 00:20:57.951 "reset": true, 00:20:57.951 "nvme_admin": false, 00:20:57.951 "nvme_io": false, 00:20:57.951 "nvme_io_md": false, 00:20:57.951 "write_zeroes": true, 00:20:57.951 "zcopy": false, 00:20:57.951 "get_zone_info": false, 00:20:57.951 "zone_management": false, 00:20:57.951 "zone_append": false, 00:20:57.951 "compare": false, 00:20:57.951 "compare_and_write": false, 00:20:57.951 "abort": false, 00:20:57.951 "seek_hole": true, 00:20:57.951 "seek_data": true, 00:20:57.951 "copy": false, 00:20:57.951 "nvme_iov_md": false 00:20:57.951 }, 00:20:57.951 "driver_specific": { 00:20:57.951 "lvol": { 00:20:57.951 "lvol_store_uuid": "567a822b-1ff2-41b5-b37f-d1d69a7de89b", 00:20:57.951 "base_bdev": "nvme0n1", 00:20:57.951 "thin_provision": true, 00:20:57.951 "num_allocated_clusters": 0, 00:20:57.951 "snapshot": false, 00:20:57.951 "clone": false, 00:20:57.951 "esnap_clone": false 00:20:57.951 } 00:20:57.951 } 00:20:57.951 } 00:20:57.951 ]' 00:20:57.951 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
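The get_bdev_size sequence, running here for the third time, is just block_size * num_blocks converted to MiB via two jq calls over the bdev_get_bdevs output: 4096 * 26476544 / 2^20 = 103424 MiB for this lvol, and 4096 * 1310720 / 2^20 = 5120 MiB for nvme0n1 earlier. A single-call equivalent, assuming the RPC output were captured to bdev.json (an illustrative file name):

  jq '.[0].block_size * .[0].num_blocks / (1024 * 1024)' bdev.json   # -> 103424

Separately, a few entries further down, restore.sh logs "[: : integer expression expected" from its line 54: the xtrace shows '[' '' -eq 1 ']', a numeric test against an option flag that was never set. It is harmless here (the test simply fails and the script continues), but the usual defensive spelling defaults the expansion first -- fast is an illustrative variable name, not the one restore.sh uses:

  fast=""                            # unset/empty flag, as in the log
  if [ "${fast:-0}" -eq 1 ]; then    # empty defaults to 0, so -eq gets an integer
    echo fast mode enabled
  fi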
00:20:58.210 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:58.210 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:58.210 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:58.210 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:58.210 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:58.210 03:31:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:58.210 03:31:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:58.468 03:31:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:58.469 03:31:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:58.469 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:58.469 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:58.469 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:58.469 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:58.469 03:31:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195f006b-e91b-49c4-a1c8-4bf3d329183a 00:20:58.469 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:58.469 { 00:20:58.469 "name": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:58.469 "aliases": [ 00:20:58.469 "lvs/nvme0n1p0" 00:20:58.469 ], 00:20:58.469 "product_name": "Logical Volume", 00:20:58.469 "block_size": 4096, 00:20:58.469 "num_blocks": 26476544, 00:20:58.469 "uuid": "195f006b-e91b-49c4-a1c8-4bf3d329183a", 00:20:58.469 "assigned_rate_limits": { 00:20:58.469 "rw_ios_per_sec": 0, 00:20:58.469 "rw_mbytes_per_sec": 0, 00:20:58.469 "r_mbytes_per_sec": 0, 00:20:58.469 "w_mbytes_per_sec": 0 00:20:58.469 }, 00:20:58.469 "claimed": false, 00:20:58.469 "zoned": false, 00:20:58.469 "supported_io_types": { 00:20:58.469 "read": true, 00:20:58.469 "write": true, 00:20:58.469 "unmap": true, 00:20:58.469 "flush": false, 00:20:58.469 "reset": true, 00:20:58.469 "nvme_admin": false, 00:20:58.469 "nvme_io": false, 00:20:58.469 "nvme_io_md": false, 00:20:58.469 "write_zeroes": true, 00:20:58.469 "zcopy": false, 00:20:58.469 "get_zone_info": false, 00:20:58.469 "zone_management": false, 00:20:58.469 "zone_append": false, 00:20:58.469 "compare": false, 00:20:58.469 "compare_and_write": false, 00:20:58.469 "abort": false, 00:20:58.469 "seek_hole": true, 00:20:58.469 "seek_data": true, 00:20:58.469 "copy": false, 00:20:58.469 "nvme_iov_md": false 00:20:58.469 }, 00:20:58.469 "driver_specific": { 00:20:58.469 "lvol": { 00:20:58.469 "lvol_store_uuid": "567a822b-1ff2-41b5-b37f-d1d69a7de89b", 00:20:58.469 "base_bdev": "nvme0n1", 00:20:58.469 "thin_provision": true, 00:20:58.469 "num_allocated_clusters": 0, 00:20:58.469 "snapshot": false, 00:20:58.469 "clone": false, 00:20:58.469 "esnap_clone": false 00:20:58.469 } 00:20:58.469 } 00:20:58.469 } 00:20:58.469 ]' 00:20:58.469 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:58.728 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:58.728 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:58.728 03:31:22 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:20:58.728 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:58.728 03:31:22 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 195f006b-e91b-49c4-a1c8-4bf3d329183a --l2p_dram_limit 10' 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:58.728 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:58.728 03:31:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 195f006b-e91b-49c4-a1c8-4bf3d329183a --l2p_dram_limit 10 -c nvc0n1p0 00:20:58.988 [2024-11-05 03:31:22.320183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.320261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.988 [2024-11-05 03:31:22.320307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:58.988 [2024-11-05 03:31:22.320322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.320447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.320464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.988 [2024-11-05 03:31:22.320482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:58.988 [2024-11-05 03:31:22.320496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.320541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.988 [2024-11-05 03:31:22.321734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.988 [2024-11-05 03:31:22.321786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.321801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.988 [2024-11-05 03:31:22.321819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:20:58.988 [2024-11-05 03:31:22.321833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.321938] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:20:58.988 [2024-11-05 03:31:22.325375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.325424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:58.988 [2024-11-05 03:31:22.325439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:58.988 [2024-11-05 03:31:22.325457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.339791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 
03:31:22.340060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.988 [2024-11-05 03:31:22.340093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.276 ms 00:20:58.988 [2024-11-05 03:31:22.340109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.340246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.340267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.988 [2024-11-05 03:31:22.340281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:58.988 [2024-11-05 03:31:22.340332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.340408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.340427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.988 [2024-11-05 03:31:22.340441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:58.988 [2024-11-05 03:31:22.340463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.340499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.988 [2024-11-05 03:31:22.346779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.346944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.988 [2024-11-05 03:31:22.346976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.298 ms 00:20:58.988 [2024-11-05 03:31:22.346990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.347042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.347056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.988 [2024-11-05 03:31:22.347073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:58.988 [2024-11-05 03:31:22.347086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.347134] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:58.988 [2024-11-05 03:31:22.347282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.988 [2024-11-05 03:31:22.347331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.988 [2024-11-05 03:31:22.347350] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.988 [2024-11-05 03:31:22.347369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.988 [2024-11-05 03:31:22.347385] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.988 [2024-11-05 03:31:22.347403] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.988 [2024-11-05 03:31:22.347416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.988 [2024-11-05 03:31:22.347437] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.988 [2024-11-05 03:31:22.347450] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.988 [2024-11-05 03:31:22.347466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.347480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.988 [2024-11-05 03:31:22.347497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:58.988 [2024-11-05 03:31:22.347524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.347610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.988 [2024-11-05 03:31:22.347624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.988 [2024-11-05 03:31:22.347641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:58.988 [2024-11-05 03:31:22.347654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.988 [2024-11-05 03:31:22.347764] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.988 [2024-11-05 03:31:22.347779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.988 [2024-11-05 03:31:22.347796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.988 [2024-11-05 03:31:22.347810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.988 [2024-11-05 03:31:22.347827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.988 [2024-11-05 03:31:22.347839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.988 [2024-11-05 03:31:22.347855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.989 [2024-11-05 03:31:22.347867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.989 [2024-11-05 03:31:22.347882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.989 [2024-11-05 03:31:22.347895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.989 [2024-11-05 03:31:22.347909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.989 [2024-11-05 03:31:22.347921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.989 [2024-11-05 03:31:22.347937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.989 [2024-11-05 03:31:22.347950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.989 [2024-11-05 03:31:22.347966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.989 [2024-11-05 03:31:22.347977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.989 [2024-11-05 03:31:22.347994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.989 [2024-11-05 03:31:22.348007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.989 [2024-11-05 03:31:22.348050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.989 
[2024-11-05 03:31:22.348091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.989 [2024-11-05 03:31:22.348136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.989 [2024-11-05 03:31:22.348176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.989 [2024-11-05 03:31:22.348221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.989 [2024-11-05 03:31:22.348247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.989 [2024-11-05 03:31:22.348259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.989 [2024-11-05 03:31:22.348274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.989 [2024-11-05 03:31:22.348298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.989 [2024-11-05 03:31:22.348314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.989 [2024-11-05 03:31:22.348327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.989 [2024-11-05 03:31:22.348354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.989 [2024-11-05 03:31:22.348370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.989 [2024-11-05 03:31:22.348399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.989 [2024-11-05 03:31:22.348411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.989 [2024-11-05 03:31:22.348452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.989 [2024-11-05 03:31:22.348471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.989 [2024-11-05 03:31:22.348483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.989 [2024-11-05 03:31:22.348499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.989 [2024-11-05 03:31:22.348511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.989 [2024-11-05 03:31:22.348526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.989 [2024-11-05 03:31:22.348545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.989 [2024-11-05 
03:31:22.348565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.989 [2024-11-05 03:31:22.348602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.989 [2024-11-05 03:31:22.348615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.989 [2024-11-05 03:31:22.348634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.989 [2024-11-05 03:31:22.348648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.989 [2024-11-05 03:31:22.348665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.989 [2024-11-05 03:31:22.348678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.989 [2024-11-05 03:31:22.348694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:58.989 [2024-11-05 03:31:22.348707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.989 [2024-11-05 03:31:22.348728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.989 [2024-11-05 03:31:22.348801] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.989 [2024-11-05 03:31:22.348818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.989 [2024-11-05 03:31:22.348848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.989 [2024-11-05 03:31:22.348861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.989 [2024-11-05 03:31:22.348877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.989 [2024-11-05 03:31:22.348890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.989 [2024-11-05 03:31:22.348906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.989 [2024-11-05 03:31:22.348920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:20:58.989 [2024-11-05 03:31:22.348936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.989 [2024-11-05 03:31:22.348988] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:58.989 [2024-11-05 03:31:22.349012] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:03.201 [2024-11-05 03:31:26.647674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.647776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:03.201 [2024-11-05 03:31:26.647799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4305.663 ms 00:21:03.201 [2024-11-05 03:31:26.647816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.694675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.695046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.201 [2024-11-05 03:31:26.695080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.558 ms 00:21:03.201 [2024-11-05 03:31:26.695099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.695260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.695280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:03.201 [2024-11-05 03:31:26.695325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:03.201 [2024-11-05 03:31:26.695348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.746946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.747004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.201 [2024-11-05 03:31:26.747020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.624 ms 00:21:03.201 [2024-11-05 03:31:26.747037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.747081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.747103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.201 [2024-11-05 03:31:26.747117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:03.201 [2024-11-05 03:31:26.747132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.748024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.748064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.201 [2024-11-05 03:31:26.748079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:21:03.201 [2024-11-05 03:31:26.748096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 
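Worth noticing in the startup trace: "Scrub NV cache" at 4305.663 ms accounts for roughly 88% of the whole 'FTL startup' management process (4869.372 ms, reported at the end of this section); the next-slowest steps, Clear L2P and Wipe P2L region, are only around 100 ms. The trace_step lines pair a "name:" entry with the "duration:" entry that follows it, so ranking steps from a saved log is a one-liner -- the file name ftl.log and a one-entry-per-line layout are assumptions:

  # Rank FTL management steps by duration, slowest first.
  awk '/428:trace_step/ {n = $0; sub(/.*name: /, "", n)}
       /430:trace_step/ {d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                         printf "%10.3f ms  %s\n", d, n}' ftl.log | sort -rn | head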
[2024-11-05 03:31:26.748215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.748234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.201 [2024-11-05 03:31:26.748251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:03.201 [2024-11-05 03:31:26.748271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.201 [2024-11-05 03:31:26.773150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.201 [2024-11-05 03:31:26.773202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.201 [2024-11-05 03:31:26.773218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.880 ms 00:21:03.201 [2024-11-05 03:31:26.773235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.787656] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:03.461 [2024-11-05 03:31:26.792835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.793039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:03.461 [2024-11-05 03:31:26.793071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.486 ms 00:21:03.461 [2024-11-05 03:31:26.793084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.899933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.899996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:03.461 [2024-11-05 03:31:26.900021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.964 ms 00:21:03.461 [2024-11-05 03:31:26.900035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.900279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.900324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:03.461 [2024-11-05 03:31:26.900367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:21:03.461 [2024-11-05 03:31:26.900380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.937423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.937469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:03.461 [2024-11-05 03:31:26.937490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.033 ms 00:21:03.461 [2024-11-05 03:31:26.937504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.972413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.972456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:03.461 [2024-11-05 03:31:26.972478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.902 ms 00:21:03.461 [2024-11-05 03:31:26.972491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.461 [2024-11-05 03:31:26.973223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.461 [2024-11-05 03:31:26.973250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.461 
[2024-11-05 03:31:26.973269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:21:03.461 [2024-11-05 03:31:26.973283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.075922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.075970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:03.721 [2024-11-05 03:31:27.075995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.716 ms 00:21:03.721 [2024-11-05 03:31:27.076009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.112849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.112895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:03.721 [2024-11-05 03:31:27.112916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.800 ms 00:21:03.721 [2024-11-05 03:31:27.112928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.146014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.146053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:03.721 [2024-11-05 03:31:27.146073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.086 ms 00:21:03.721 [2024-11-05 03:31:27.146085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.180332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.180374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.721 [2024-11-05 03:31:27.180394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.249 ms 00:21:03.721 [2024-11-05 03:31:27.180407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.180464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.180478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.721 [2024-11-05 03:31:27.180512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:03.721 [2024-11-05 03:31:27.180525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.180673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.721 [2024-11-05 03:31:27.180690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.721 [2024-11-05 03:31:27.180711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:03.721 [2024-11-05 03:31:27.180722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.721 [2024-11-05 03:31:27.182188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4869.372 ms, result 0 00:21:03.721 { 00:21:03.721 "name": "ftl0", 00:21:03.721 "uuid": "8f0dba11-4f31-4028-aa3d-142ac12375d9" 00:21:03.721 } 00:21:03.721 03:31:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:03.721 03:31:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:03.981 03:31:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:03.981 03:31:27 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:04.241 [2024-11-05 03:31:27.592561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.592614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.241 [2024-11-05 03:31:27.592630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:04.241 [2024-11-05 03:31:27.592657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.592686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.241 [2024-11-05 03:31:27.597319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.597355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.241 [2024-11-05 03:31:27.597373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:21:04.241 [2024-11-05 03:31:27.597385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.597637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.597652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.241 [2024-11-05 03:31:27.597673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:21:04.241 [2024-11-05 03:31:27.597685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.600128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.600157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.241 [2024-11-05 03:31:27.600175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:21:04.241 [2024-11-05 03:31:27.600187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.605017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.605053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:04.241 [2024-11-05 03:31:27.605076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:21:04.241 [2024-11-05 03:31:27.605088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.640024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.640065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.241 [2024-11-05 03:31:27.640085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.907 ms 00:21:04.241 [2024-11-05 03:31:27.640096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.662785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.662828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.241 [2024-11-05 03:31:27.662848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.670 ms 00:21:04.241 [2024-11-05 03:31:27.662861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.663017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.663034] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.241 [2024-11-05 03:31:27.663050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:04.241 [2024-11-05 03:31:27.663063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.698453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.698493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:04.241 [2024-11-05 03:31:27.698512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.420 ms 00:21:04.241 [2024-11-05 03:31:27.698523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.733829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.733875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:04.241 [2024-11-05 03:31:27.733895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.311 ms 00:21:04.241 [2024-11-05 03:31:27.733907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.767638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.767678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.241 [2024-11-05 03:31:27.767696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.729 ms 00:21:04.241 [2024-11-05 03:31:27.767707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.801587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.241 [2024-11-05 03:31:27.801625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.241 [2024-11-05 03:31:27.801644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.815 ms 00:21:04.241 [2024-11-05 03:31:27.801655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.241 [2024-11-05 03:31:27.801703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.241 [2024-11-05 03:31:27.801721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.241 [2024-11-05 03:31:27.801739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801860] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.801986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 
[2024-11-05 03:31:27.802218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:04.242 [2024-11-05 03:31:27.802604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.802986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.242 [2024-11-05 03:31:27.803002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.243 [2024-11-05 03:31:27.803212] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.243 [2024-11-05 03:31:27.803232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:21:04.243 [2024-11-05 03:31:27.803245] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.243 [2024-11-05 03:31:27.803263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.243 [2024-11-05 03:31:27.803275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.243 [2024-11-05 03:31:27.803306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.243 [2024-11-05 03:31:27.803317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.243 [2024-11-05 03:31:27.803332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.243 [2024-11-05 03:31:27.803343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.243 [2024-11-05 03:31:27.803357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.243 [2024-11-05 03:31:27.803368] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:04.243 [2024-11-05 03:31:27.803383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.243 [2024-11-05 03:31:27.803395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.243 [2024-11-05 03:31:27.803410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:21:04.243 [2024-11-05 03:31:27.803421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.823919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.503 [2024-11-05 03:31:27.823956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.503 [2024-11-05 03:31:27.823975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.464 ms 00:21:04.503 [2024-11-05 03:31:27.823987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.824616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.503 [2024-11-05 03:31:27.824637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.503 [2024-11-05 03:31:27.824654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:21:04.503 [2024-11-05 03:31:27.824671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.890262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.503 [2024-11-05 03:31:27.890314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.503 [2024-11-05 03:31:27.890333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.503 [2024-11-05 03:31:27.890347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.890422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.503 [2024-11-05 03:31:27.890436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.503 [2024-11-05 03:31:27.890452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.503 [2024-11-05 03:31:27.890468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.890579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.503 [2024-11-05 03:31:27.890594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.503 [2024-11-05 03:31:27.890609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.503 [2024-11-05 03:31:27.890622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:27.890652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.503 [2024-11-05 03:31:27.890665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.503 [2024-11-05 03:31:27.890681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.503 [2024-11-05 03:31:27.890703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.503 [2024-11-05 03:31:28.022676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.503 [2024-11-05 03:31:28.022751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.503 [2024-11-05 03:31:28.022776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:04.503 [2024-11-05 03:31:28.022790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.128194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.128273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.762 [2024-11-05 03:31:28.128322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.128342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.128517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.128533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.762 [2024-11-05 03:31:28.128550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.128563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.128644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.128658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.762 [2024-11-05 03:31:28.128675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.128688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.128829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.128845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.762 [2024-11-05 03:31:28.128861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.128874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.128930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.128944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.762 [2024-11-05 03:31:28.128961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.128973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.129031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.129050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.762 [2024-11-05 03:31:28.129065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.129078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.129144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.762 [2024-11-05 03:31:28.129158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.762 [2024-11-05 03:31:28.129175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.762 [2024-11-05 03:31:28.129188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.762 [2024-11-05 03:31:28.129389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.617 ms, result 0 00:21:04.762 true 00:21:04.762 03:31:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76406 
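# The xtrace lines that follow expand autotest_common.sh's killprocess helper.
# A minimal sketch of the pattern (an assumed reconstruction from the trace
# below, not the helper's verbatim body): verify the pid is set and still
# alive, log the kill, send SIGTERM, then wait so the child is reaped before
# the next test stage runs.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone
    echo "killing process with pid $pid"
    kill "$pid"                               # default SIGTERM
    wait "$pid"                               # reap it and surface its exit code
}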
00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76406 ']' 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76406 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76406 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:04.762 killing process with pid 76406 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76406' 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76406 00:21:04.762 03:31:28 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76406 00:21:12.885 03:31:35 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:17.108 262144+0 records in 00:21:17.108 262144+0 records out 00:21:17.108 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.19988 s, 256 MB/s 00:21:17.108 03:31:40 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:18.486 03:31:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:18.486 [2024-11-05 03:31:42.055959] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:21:18.486 [2024-11-05 03:31:42.056097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:21:18.745 [2024-11-05 03:31:42.246087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.004 [2024-11-05 03:31:42.394213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.262 [2024-11-05 03:31:42.832678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:19.262 [2024-11-05 03:31:42.832750] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:19.522 [2024-11-05 03:31:43.006728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.006780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:19.522 [2024-11-05 03:31:43.006807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:19.522 [2024-11-05 03:31:43.006818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.006873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.006886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.522 [2024-11-05 03:31:43.006905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:19.522 [2024-11-05 03:31:43.006916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.006940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:19.522 [2024-11-05 03:31:43.007956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:19.522 [2024-11-05 03:31:43.007981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.007992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.522 [2024-11-05 03:31:43.008004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:21:19.522 [2024-11-05 03:31:43.008015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.010760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:19.522 [2024-11-05 03:31:43.031780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.031819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:19.522 [2024-11-05 03:31:43.031836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.055 ms 00:21:19.522 [2024-11-05 03:31:43.031847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.031929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.031943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:19.522 [2024-11-05 03:31:43.031956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:19.522 [2024-11-05 03:31:43.031967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.044192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.044224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.522 [2024-11-05 03:31:43.044238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:21:19.522 [2024-11-05 03:31:43.044249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.044418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.044434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.522 [2024-11-05 03:31:43.044446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:21:19.522 [2024-11-05 03:31:43.044456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.044514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.044528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:19.522 [2024-11-05 03:31:43.044539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:19.522 [2024-11-05 03:31:43.044549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.044577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:19.522 [2024-11-05 03:31:43.050515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.050548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.522 [2024-11-05 03:31:43.050562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.955 ms 00:21:19.522 [2024-11-05 03:31:43.050580] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.050613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.050625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:19.522 [2024-11-05 03:31:43.050636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:19.522 [2024-11-05 03:31:43.050646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.050686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:19.522 [2024-11-05 03:31:43.050726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:19.522 [2024-11-05 03:31:43.050766] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:19.522 [2024-11-05 03:31:43.050793] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:19.522 [2024-11-05 03:31:43.050887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:19.522 [2024-11-05 03:31:43.050901] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:19.522 [2024-11-05 03:31:43.050916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:19.522 [2024-11-05 03:31:43.050930] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:19.522 [2024-11-05 03:31:43.050943] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:19.522 [2024-11-05 03:31:43.050954] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:19.522 [2024-11-05 03:31:43.050965] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:19.522 [2024-11-05 03:31:43.050975] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:19.522 [2024-11-05 03:31:43.050986] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:19.522 [2024-11-05 03:31:43.051005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.051016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:19.522 [2024-11-05 03:31:43.051028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:19.522 [2024-11-05 03:31:43.051038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.051112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.051123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:19.522 [2024-11-05 03:31:43.051134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:19.522 [2024-11-05 03:31:43.051144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.051248] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:19.522 [2024-11-05 03:31:43.051271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:19.522 [2024-11-05 03:31:43.051283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:19.522 [2024-11-05 03:31:43.051307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:19.522 [2024-11-05 03:31:43.051328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:19.522 [2024-11-05 03:31:43.051359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:19.522 [2024-11-05 03:31:43.051384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:19.522 [2024-11-05 03:31:43.051394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:19.522 [2024-11-05 03:31:43.051403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:19.522 [2024-11-05 03:31:43.051413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:19.522 [2024-11-05 03:31:43.051423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:19.522 [2024-11-05 03:31:43.051447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:19.522 [2024-11-05 03:31:43.051467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:19.522 [2024-11-05 03:31:43.051496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:19.522 [2024-11-05 03:31:43.051525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:19.522 [2024-11-05 03:31:43.051553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:19.522 [2024-11-05 03:31:43.051582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:19.522 [2024-11-05 03:31:43.051610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:19.522 [2024-11-05 03:31:43.051629] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:19.522 [2024-11-05 03:31:43.051638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:19.522 [2024-11-05 03:31:43.051648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:19.522 [2024-11-05 03:31:43.051657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:19.522 [2024-11-05 03:31:43.051666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:19.522 [2024-11-05 03:31:43.051675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:19.522 [2024-11-05 03:31:43.051693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:19.522 [2024-11-05 03:31:43.051703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051712] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:19.522 [2024-11-05 03:31:43.051724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:19.522 [2024-11-05 03:31:43.051734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.522 [2024-11-05 03:31:43.051754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:19.522 [2024-11-05 03:31:43.051764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:19.522 [2024-11-05 03:31:43.051773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:19.522 [2024-11-05 03:31:43.051783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:19.522 [2024-11-05 03:31:43.051793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:19.522 [2024-11-05 03:31:43.051803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:19.522 [2024-11-05 03:31:43.051815] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:19.522 [2024-11-05 03:31:43.051828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.051840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:19.522 [2024-11-05 03:31:43.051850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:19.522 [2024-11-05 03:31:43.051860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:19.522 [2024-11-05 03:31:43.051871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:19.522 [2024-11-05 03:31:43.051881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:19.522 [2024-11-05 03:31:43.051891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:19.522 [2024-11-05 03:31:43.051902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:19.522 [2024-11-05 03:31:43.051912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:19.522 [2024-11-05 03:31:43.051923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:19.522 [2024-11-05 03:31:43.051933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.051944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.051954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.051965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.051975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:19.522 [2024-11-05 03:31:43.051985] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:19.522 [2024-11-05 03:31:43.052004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.052016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:19.522 [2024-11-05 03:31:43.052028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:19.522 [2024-11-05 03:31:43.052039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:19.522 [2024-11-05 03:31:43.052050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:19.522 [2024-11-05 03:31:43.052061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.052072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:19.522 [2024-11-05 03:31:43.052083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:21:19.522 [2024-11-05 03:31:43.052095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.104166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-11-05 03:31:43.104205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.522 [2024-11-05 03:31:43.104221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.098 ms 00:21:19.522 [2024-11-05 03:31:43.104232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-11-05 03:31:43.104337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-11-05 03:31:43.104351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:19.523 [2024-11-05 03:31:43.104362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:21:19.523 [2024-11-05 03:31:43.104373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.170891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.170931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.782 [2024-11-05 03:31:43.170945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.530 ms 00:21:19.782 [2024-11-05 03:31:43.170957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.171000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.171014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.782 [2024-11-05 03:31:43.171035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:19.782 [2024-11-05 03:31:43.171059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.171842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.171858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.782 [2024-11-05 03:31:43.171871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:21:19.782 [2024-11-05 03:31:43.171881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.172024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.172040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.782 [2024-11-05 03:31:43.172052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:19.782 [2024-11-05 03:31:43.172070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.196438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.196474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.782 [2024-11-05 03:31:43.196496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.384 ms 00:21:19.782 [2024-11-05 03:31:43.196507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.217195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:19.782 [2024-11-05 03:31:43.217236] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:19.782 [2024-11-05 03:31:43.217252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.217263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:19.782 [2024-11-05 03:31:43.217275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.664 ms 00:21:19.782 [2024-11-05 03:31:43.217294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.248155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.248193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:19.782 [2024-11-05 03:31:43.248220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.865 ms 00:21:19.782 [2024-11-05 03:31:43.248231] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.266735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.266787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:19.782 [2024-11-05 03:31:43.266801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:21:19.782 [2024-11-05 03:31:43.266811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.285548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.285596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:19.782 [2024-11-05 03:31:43.285610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:21:19.782 [2024-11-05 03:31:43.285620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-05 03:31:43.286429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-11-05 03:31:43.286456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:19.782 [2024-11-05 03:31:43.286470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:21:19.782 [2024-11-05 03:31:43.286482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.041 [2024-11-05 03:31:43.385453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.041 [2024-11-05 03:31:43.385540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:20.041 [2024-11-05 03:31:43.385561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.098 ms 00:21:20.041 [2024-11-05 03:31:43.385584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.041 [2024-11-05 03:31:43.396513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:20.042 [2024-11-05 03:31:43.400598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.400631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:20.042 [2024-11-05 03:31:43.400646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.986 ms 00:21:20.042 [2024-11-05 03:31:43.400658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.400756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.400771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:20.042 [2024-11-05 03:31:43.400785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:20.042 [2024-11-05 03:31:43.400796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.400893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.400908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:20.042 [2024-11-05 03:31:43.400920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:20.042 [2024-11-05 03:31:43.400931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.400959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.400970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:20.042 [2024-11-05 03:31:43.400981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:20.042 [2024-11-05 03:31:43.400992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.401040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:20.042 [2024-11-05 03:31:43.401055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.401072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:20.042 [2024-11-05 03:31:43.401083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:20.042 [2024-11-05 03:31:43.401094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.438794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.438835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:20.042 [2024-11-05 03:31:43.438851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.740 ms 00:21:20.042 [2024-11-05 03:31:43.438862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.438961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.042 [2024-11-05 03:31:43.438975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:20.042 [2024-11-05 03:31:43.438987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:20.042 [2024-11-05 03:31:43.438998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.042 [2024-11-05 03:31:43.440536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.862 ms, result 0 00:21:20.979  [2024-11-05T03:31:45.506Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-05T03:31:46.881Z] Copying: 47/1024 [MB] (24 MBps) [2024-11-05T03:31:47.449Z] Copying: 74/1024 [MB] (26 MBps) [2024-11-05T03:31:48.825Z] Copying: 100/1024 [MB] (26 MBps) [2024-11-05T03:31:49.760Z] Copying: 127/1024 [MB] (26 MBps) [2024-11-05T03:31:50.695Z] Copying: 152/1024 [MB] (25 MBps) [2024-11-05T03:31:51.632Z] Copying: 177/1024 [MB] (24 MBps) [2024-11-05T03:31:52.571Z] Copying: 202/1024 [MB] (25 MBps) [2024-11-05T03:31:53.506Z] Copying: 228/1024 [MB] (25 MBps) [2024-11-05T03:31:54.442Z] Copying: 253/1024 [MB] (25 MBps) [2024-11-05T03:31:55.818Z] Copying: 278/1024 [MB] (25 MBps) [2024-11-05T03:31:56.754Z] Copying: 303/1024 [MB] (24 MBps) [2024-11-05T03:31:57.690Z] Copying: 328/1024 [MB] (25 MBps) [2024-11-05T03:31:58.641Z] Copying: 354/1024 [MB] (25 MBps) [2024-11-05T03:31:59.577Z] Copying: 381/1024 [MB] (27 MBps) [2024-11-05T03:32:00.508Z] Copying: 409/1024 [MB] (27 MBps) [2024-11-05T03:32:01.445Z] Copying: 437/1024 [MB] (28 MBps) [2024-11-05T03:32:02.820Z] Copying: 464/1024 [MB] (27 MBps) [2024-11-05T03:32:03.755Z] Copying: 490/1024 [MB] (25 MBps) [2024-11-05T03:32:04.691Z] Copying: 515/1024 [MB] (25 MBps) [2024-11-05T03:32:05.653Z] Copying: 541/1024 [MB] (25 MBps) [2024-11-05T03:32:06.600Z] Copying: 567/1024 [MB] (25 MBps) [2024-11-05T03:32:07.537Z] Copying: 593/1024 [MB] (25 MBps) [2024-11-05T03:32:08.473Z] Copying: 617/1024 [MB] (23 MBps) [2024-11-05T03:32:09.410Z] Copying: 640/1024 [MB] (22 MBps) [2024-11-05T03:32:10.787Z] Copying: 664/1024 [MB] (24 MBps) [2024-11-05T03:32:11.724Z] Copying: 688/1024 [MB] (24 
MBps) [2024-11-05T03:32:12.696Z] Copying: 713/1024 [MB] (24 MBps) [2024-11-05T03:32:13.631Z] Copying: 736/1024 [MB] (23 MBps) [2024-11-05T03:32:14.569Z] Copying: 759/1024 [MB] (23 MBps) [2024-11-05T03:32:15.506Z] Copying: 783/1024 [MB] (23 MBps) [2024-11-05T03:32:16.442Z] Copying: 808/1024 [MB] (24 MBps) [2024-11-05T03:32:17.820Z] Copying: 833/1024 [MB] (24 MBps) [2024-11-05T03:32:18.755Z] Copying: 857/1024 [MB] (24 MBps) [2024-11-05T03:32:19.691Z] Copying: 883/1024 [MB] (25 MBps) [2024-11-05T03:32:20.626Z] Copying: 908/1024 [MB] (25 MBps) [2024-11-05T03:32:21.562Z] Copying: 933/1024 [MB] (25 MBps) [2024-11-05T03:32:22.497Z] Copying: 958/1024 [MB] (25 MBps) [2024-11-05T03:32:23.432Z] Copying: 985/1024 [MB] (26 MBps) [2024-11-05T03:32:23.999Z] Copying: 1011/1024 [MB] (26 MBps) [2024-11-05T03:32:23.999Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-05 03:32:23.837856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-11-05 03:32:23.837913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.415 [2024-11-05 03:32:23.837930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:00.415 [2024-11-05 03:32:23.837941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-11-05 03:32:23.837963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.415 [2024-11-05 03:32:23.842227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-11-05 03:32:23.842262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.415 [2024-11-05 03:32:23.842276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:22:00.415 [2024-11-05 03:32:23.842295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-11-05 03:32:23.844030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-11-05 03:32:23.844071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.415 [2024-11-05 03:32:23.844085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:22:00.415 [2024-11-05 03:32:23.844095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-11-05 03:32:23.861525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.415 [2024-11-05 03:32:23.861566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.415 [2024-11-05 03:32:23.861580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.441 ms 00:22:00.415 [2024-11-05 03:32:23.861590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.415 [2024-11-05 03:32:23.866558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.416 [2024-11-05 03:32:23.866598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.416 [2024-11-05 03:32:23.866610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:22:00.416 [2024-11-05 03:32:23.866621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.416 [2024-11-05 03:32:23.903636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.416 [2024-11-05 03:32:23.903677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.416 [2024-11-05 03:32:23.903707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.006 ms 00:22:00.416 [2024-11-05 03:32:23.903717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.416 [2024-11-05 03:32:23.925482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.416 [2024-11-05 03:32:23.925521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.416 [2024-11-05 03:32:23.925536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.759 ms 00:22:00.416 [2024-11-05 03:32:23.925547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.416 [2024-11-05 03:32:23.925686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.416 [2024-11-05 03:32:23.925704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.416 [2024-11-05 03:32:23.925721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:00.416 [2024-11-05 03:32:23.925731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.416 [2024-11-05 03:32:23.963237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.416 [2024-11-05 03:32:23.963274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.416 [2024-11-05 03:32:23.963294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.551 ms 00:22:00.416 [2024-11-05 03:32:23.963305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.675 [2024-11-05 03:32:23.999744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-05 03:32:23.999784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.676 [2024-11-05 03:32:23.999827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.460 ms 00:22:00.676 [2024-11-05 03:32:23.999843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-05 03:32:24.035057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-05 03:32:24.035098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.676 [2024-11-05 03:32:24.035112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.230 ms 00:22:00.676 [2024-11-05 03:32:24.035122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-05 03:32:24.071353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-05 03:32:24.071392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.676 [2024-11-05 03:32:24.071406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.208 ms 00:22:00.676 [2024-11-05 03:32:24.071416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-05 03:32:24.071471] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.676 [2024-11-05 03:32:24.071488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:00.676 [2024-11-05 03:32:24.071538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.071995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.676 [2024-11-05 03:32:24.072369] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.677 [2024-11-05 03:32:24.072611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.677 [2024-11-05 03:32:24.072628] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:22:00.677 [2024-11-05 03:32:24.072639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.677 [2024-11-05 03:32:24.072653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:00.677 [2024-11-05 03:32:24.072663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.677 [2024-11-05 03:32:24.072673] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.677 [2024-11-05 03:32:24.072682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.677 [2024-11-05 03:32:24.072692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.677 [2024-11-05 03:32:24.072702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.677 [2024-11-05 03:32:24.072722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.677 [2024-11-05 03:32:24.072731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.677 [2024-11-05 03:32:24.072740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-05 03:32:24.072750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.677 [2024-11-05 03:32:24.072761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:22:00.677 [2024-11-05 03:32:24.072770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.092441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-05 03:32:24.092477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.677 [2024-11-05 03:32:24.092490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.655 ms 00:22:00.677 [2024-11-05 03:32:24.092501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.093048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-05 03:32:24.093065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.677 [2024-11-05 03:32:24.093076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:22:00.677 [2024-11-05 03:32:24.093086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.144755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.677 [2024-11-05 03:32:24.144796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.677 [2024-11-05 03:32:24.144810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.677 [2024-11-05 03:32:24.144820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.144877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.677 [2024-11-05 03:32:24.144888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.677 [2024-11-05 03:32:24.144899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.677 [2024-11-05 03:32:24.144908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.144977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.677 [2024-11-05 03:32:24.144990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.677 [2024-11-05 03:32:24.145001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.677 [2024-11-05 03:32:24.145011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-05 03:32:24.145029] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.677 [2024-11-05 03:32:24.145039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.677 [2024-11-05 03:32:24.145049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.677 [2024-11-05 03:32:24.145059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.270461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.270516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.937 [2024-11-05 03:32:24.270532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.270543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.370905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.370962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.937 [2024-11-05 03:32:24.370978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.370989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.937 [2024-11-05 03:32:24.371129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.937 [2024-11-05 03:32:24.371207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.937 [2024-11-05 03:32:24.371540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.937 [2024-11-05 03:32:24.371617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.937 [2024-11-05 03:32:24.371692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371703] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.937 [2024-11-05 03:32:24.371758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.937 [2024-11-05 03:32:24.371768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.937 [2024-11-05 03:32:24.371779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.937 [2024-11-05 03:32:24.371909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.888 ms, result 0 00:22:01.958 00:22:01.958 00:22:01.958 03:32:25 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:02.216 [2024-11-05 03:32:25.630369] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:22:02.216 [2024-11-05 03:32:25.630502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77113 ] 00:22:02.474 [2024-11-05 03:32:25.812980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.474 [2024-11-05 03:32:25.924735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.731 [2024-11-05 03:32:26.286221] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.731 [2024-11-05 03:32:26.286299] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.988 [2024-11-05 03:32:26.447110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.447163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.988 [2024-11-05 03:32:26.447186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.988 [2024-11-05 03:32:26.447196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.447246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.447259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.988 [2024-11-05 03:32:26.447273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:02.988 [2024-11-05 03:32:26.447283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.447318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.988 [2024-11-05 03:32:26.448384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.988 [2024-11-05 03:32:26.448418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.448429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.988 [2024-11-05 03:32:26.448441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:22:02.988 [2024-11-05 03:32:26.448461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.449898] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.988 [2024-11-05 03:32:26.469849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.469889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.988 [2024-11-05 03:32:26.469904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.984 ms 00:22:02.988 [2024-11-05 03:32:26.469915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.469992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.470005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.988 [2024-11-05 03:32:26.470016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:02.988 [2024-11-05 03:32:26.470026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.476907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.476937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.988 [2024-11-05 03:32:26.476949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:22:02.988 [2024-11-05 03:32:26.476963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.477042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.477055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.988 [2024-11-05 03:32:26.477066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:02.988 [2024-11-05 03:32:26.477076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.477118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.477130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.988 [2024-11-05 03:32:26.477140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:02.988 [2024-11-05 03:32:26.477150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.477177] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.988 [2024-11-05 03:32:26.481990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.482021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.988 [2024-11-05 03:32:26.482036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:22:02.988 [2024-11-05 03:32:26.482046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.482076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.988 [2024-11-05 03:32:26.482087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.988 [2024-11-05 03:32:26.482098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:02.988 [2024-11-05 03:32:26.482108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.988 [2024-11-05 03:32:26.482163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.988 [2024-11-05 03:32:26.482187] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.989 [2024-11-05 03:32:26.482223] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.989 [2024-11-05 03:32:26.482245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.989 [2024-11-05 03:32:26.482348] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.989 [2024-11-05 03:32:26.482362] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.989 [2024-11-05 03:32:26.482375] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.989 [2024-11-05 03:32:26.482388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.989 [2024-11-05 03:32:26.482422] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.989 [2024-11-05 03:32:26.482441] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.989 [2024-11-05 03:32:26.482454] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.989 [2024-11-05 03:32:26.482465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.989 [2024-11-05 03:32:26.482475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.989 [2024-11-05 03:32:26.482487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:22:02.989 [2024-11-05 03:32:26.482497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.989 [2024-11-05 03:32:26.482568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.989 [2024-11-05 03:32:26.482579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.989 [2024-11-05 03:32:26.482589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:02.989 [2024-11-05 03:32:26.482599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.989 [2024-11-05 03:32:26.482703] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.989 [2024-11-05 03:32:26.482718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.989 [2024-11-05 03:32:26.482728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.989 [2024-11-05 03:32:26.482758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.989 [2024-11-05 03:32:26.482786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.989 [2024-11-05 
03:32:26.482796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.989 [2024-11-05 03:32:26.482806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.989 [2024-11-05 03:32:26.482815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.989 [2024-11-05 03:32:26.482823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.989 [2024-11-05 03:32:26.482832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.989 [2024-11-05 03:32:26.482842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.989 [2024-11-05 03:32:26.482861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.989 [2024-11-05 03:32:26.482879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.989 [2024-11-05 03:32:26.482907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.989 [2024-11-05 03:32:26.482934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.989 [2024-11-05 03:32:26.482961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.989 [2024-11-05 03:32:26.482979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.989 [2024-11-05 03:32:26.482988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.989 [2024-11-05 03:32:26.482997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.989 [2024-11-05 03:32:26.483006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.989 [2024-11-05 03:32:26.483015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.989 [2024-11-05 03:32:26.483023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.989 [2024-11-05 03:32:26.483033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.989 [2024-11-05 03:32:26.483042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.989 [2024-11-05 03:32:26.483051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.989 [2024-11-05 03:32:26.483060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.989 [2024-11-05 03:32:26.483070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.989 [2024-11-05 03:32:26.483079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.483088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:22:02.989 [2024-11-05 03:32:26.483096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.989 [2024-11-05 03:32:26.483107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.483116] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.989 [2024-11-05 03:32:26.483126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.989 [2024-11-05 03:32:26.483135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.989 [2024-11-05 03:32:26.483145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.989 [2024-11-05 03:32:26.483156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.989 [2024-11-05 03:32:26.483165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.989 [2024-11-05 03:32:26.483174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.989 [2024-11-05 03:32:26.483183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.989 [2024-11-05 03:32:26.483192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.989 [2024-11-05 03:32:26.483202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.989 [2024-11-05 03:32:26.483212] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.989 [2024-11-05 03:32:26.483225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.989 [2024-11-05 03:32:26.483250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.989 [2024-11-05 03:32:26.483261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.989 [2024-11-05 03:32:26.483271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.989 [2024-11-05 03:32:26.483281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.989 [2024-11-05 03:32:26.483303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.989 [2024-11-05 03:32:26.483313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.989 [2024-11-05 03:32:26.483323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.989 [2024-11-05 03:32:26.483333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.989 [2024-11-05 03:32:26.483344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.989 [2024-11-05 03:32:26.483395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.989 [2024-11-05 03:32:26.483406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.989 [2024-11-05 03:32:26.483427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.989 [2024-11-05 03:32:26.483437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.989 [2024-11-05 03:32:26.483449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.989 [2024-11-05 03:32:26.483460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.989 [2024-11-05 03:32:26.483470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.989 [2024-11-05 03:32:26.483481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:22:02.989 [2024-11-05 03:32:26.483491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.989 [2024-11-05 03:32:26.522707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.989 [2024-11-05 03:32:26.522747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.989 [2024-11-05 03:32:26.522762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.233 ms 00:22:02.989 [2024-11-05 03:32:26.522777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.989 [2024-11-05 03:32:26.522862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.989 [2024-11-05 03:32:26.522873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.989 [2024-11-05 03:32:26.522884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:02.989 [2024-11-05 03:32:26.522894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.594317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.247 [2024-11-05 03:32:26.594361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.247 [2024-11-05 03:32:26.594374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.470 ms 00:22:03.247 [2024-11-05 03:32:26.594385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.594430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.247 [2024-11-05 
03:32:26.594441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.247 [2024-11-05 03:32:26.594456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:03.247 [2024-11-05 03:32:26.594466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.594985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.247 [2024-11-05 03:32:26.595008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.247 [2024-11-05 03:32:26.595020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:22:03.247 [2024-11-05 03:32:26.595031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.595156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.247 [2024-11-05 03:32:26.595182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.247 [2024-11-05 03:32:26.595198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:03.247 [2024-11-05 03:32:26.595208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.615951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.247 [2024-11-05 03:32:26.615989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.247 [2024-11-05 03:32:26.616003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.756 ms 00:22:03.247 [2024-11-05 03:32:26.616013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.247 [2024-11-05 03:32:26.635999] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:03.247 [2024-11-05 03:32:26.636045] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.248 [2024-11-05 03:32:26.636061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.636072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.248 [2024-11-05 03:32:26.636085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.979 ms 00:22:03.248 [2024-11-05 03:32:26.636095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.666658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.666719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.248 [2024-11-05 03:32:26.666735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.570 ms 00:22:03.248 [2024-11-05 03:32:26.666747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.684741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.684780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.248 [2024-11-05 03:32:26.684793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:22:03.248 [2024-11-05 03:32:26.684802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.703036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.703073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:22:03.248 [2024-11-05 03:32:26.703086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.223 ms 00:22:03.248 [2024-11-05 03:32:26.703096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.703867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.703900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.248 [2024-11-05 03:32:26.703916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:22:03.248 [2024-11-05 03:32:26.703926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.790400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.790463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.248 [2024-11-05 03:32:26.790487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.591 ms 00:22:03.248 [2024-11-05 03:32:26.790499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.801450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:03.248 [2024-11-05 03:32:26.804034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.804067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.248 [2024-11-05 03:32:26.804081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.506 ms 00:22:03.248 [2024-11-05 03:32:26.804091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.804176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.804189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.248 [2024-11-05 03:32:26.804201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:03.248 [2024-11-05 03:32:26.804215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.804315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.804328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.248 [2024-11-05 03:32:26.804339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:03.248 [2024-11-05 03:32:26.804349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.804374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.804384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.248 [2024-11-05 03:32:26.804394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.248 [2024-11-05 03:32:26.804404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.248 [2024-11-05 03:32:26.804441] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.248 [2024-11-05 03:32:26.804453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.248 [2024-11-05 03:32:26.804463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.248 [2024-11-05 03:32:26.804475] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:03.248 [2024-11-05 03:32:26.804484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.506 [2024-11-05 03:32:26.840800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.506 [2024-11-05 03:32:26.840841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.506 [2024-11-05 03:32:26.840862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.352 ms 00:22:03.506 [2024-11-05 03:32:26.840873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.506 [2024-11-05 03:32:26.840952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.506 [2024-11-05 03:32:26.840965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:03.506 [2024-11-05 03:32:26.840976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:03.506 [2024-11-05 03:32:26.840986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.506 [2024-11-05 03:32:26.842103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.182 ms, result 0 00:22:04.882  [2024-11-05T03:32:29.403Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-05T03:32:30.338Z] Copying: 52/1024 [MB] (24 MBps) [2024-11-05T03:32:31.275Z] Copying: 76/1024 [MB] (24 MBps) [2024-11-05T03:32:32.212Z] Copying: 100/1024 [MB] (23 MBps) [2024-11-05T03:32:33.150Z] Copying: 124/1024 [MB] (24 MBps) [2024-11-05T03:32:34.088Z] Copying: 150/1024 [MB] (25 MBps) [2024-11-05T03:32:35.468Z] Copying: 175/1024 [MB] (25 MBps) [2024-11-05T03:32:36.061Z] Copying: 202/1024 [MB] (26 MBps) [2024-11-05T03:32:37.441Z] Copying: 230/1024 [MB] (27 MBps) [2024-11-05T03:32:38.377Z] Copying: 257/1024 [MB] (27 MBps) [2024-11-05T03:32:39.336Z] Copying: 284/1024 [MB] (27 MBps) [2024-11-05T03:32:40.279Z] Copying: 311/1024 [MB] (26 MBps) [2024-11-05T03:32:41.216Z] Copying: 337/1024 [MB] (26 MBps) [2024-11-05T03:32:42.151Z] Copying: 364/1024 [MB] (26 MBps) [2024-11-05T03:32:43.084Z] Copying: 391/1024 [MB] (27 MBps) [2024-11-05T03:32:44.464Z] Copying: 418/1024 [MB] (26 MBps) [2024-11-05T03:32:45.409Z] Copying: 444/1024 [MB] (26 MBps) [2024-11-05T03:32:46.380Z] Copying: 470/1024 [MB] (26 MBps) [2024-11-05T03:32:47.316Z] Copying: 497/1024 [MB] (26 MBps) [2024-11-05T03:32:48.253Z] Copying: 523/1024 [MB] (26 MBps) [2024-11-05T03:32:49.191Z] Copying: 550/1024 [MB] (26 MBps) [2024-11-05T03:32:50.127Z] Copying: 577/1024 [MB] (26 MBps) [2024-11-05T03:32:51.063Z] Copying: 603/1024 [MB] (26 MBps) [2024-11-05T03:32:52.441Z] Copying: 629/1024 [MB] (26 MBps) [2024-11-05T03:32:53.378Z] Copying: 657/1024 [MB] (27 MBps) [2024-11-05T03:32:54.315Z] Copying: 685/1024 [MB] (28 MBps) [2024-11-05T03:32:55.290Z] Copying: 714/1024 [MB] (28 MBps) [2024-11-05T03:32:56.226Z] Copying: 742/1024 [MB] (27 MBps) [2024-11-05T03:32:57.163Z] Copying: 769/1024 [MB] (27 MBps) [2024-11-05T03:32:58.099Z] Copying: 796/1024 [MB] (26 MBps) [2024-11-05T03:32:59.034Z] Copying: 823/1024 [MB] (26 MBps) [2024-11-05T03:33:00.409Z] Copying: 849/1024 [MB] (26 MBps) [2024-11-05T03:33:01.346Z] Copying: 876/1024 [MB] (26 MBps) [2024-11-05T03:33:02.283Z] Copying: 902/1024 [MB] (25 MBps) [2024-11-05T03:33:03.224Z] Copying: 927/1024 [MB] (25 MBps) [2024-11-05T03:33:04.177Z] Copying: 952/1024 [MB] (25 MBps) [2024-11-05T03:33:05.112Z] Copying: 979/1024 [MB] (26 MBps) [2024-11-05T03:33:05.680Z] Copying: 1006/1024 [MB] (26 MBps) [2024-11-05T03:33:05.941Z] 
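The run of "Copying: N/1024 [MB] (M MBps)" entries above is spdk_dd's progress meter flattened into the console log; the closing 1024/1024 frame just below reports the average over the whole copy. Where a run is cut short and never prints that closing frame, the same figure can be recovered from the frames themselves. A minimal sketch, assuming only the frame format visible in this log; 'console.log' is a placeholder path, and a log containing several copy phases (as this one does, see the second phase further down) needs its frame list sliced per phase first:

    # Recompute spdk_dd's average copy throughput from the progress frames.
    # Only the "[<ISO timestamp>Z] Copying: N/total [MB]" format seen in
    # this log is assumed; "console.log" is a placeholder path.
    import re
    from datetime import datetime

    FRAME_RE = re.compile(
        r"\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)Z\] Copying: (\d+)/\d+ \[MB\]")

    def average_mbps(log_path: str) -> float:
        with open(log_path) as log:
            frames = [(datetime.fromisoformat(m.group(1)), int(m.group(2)))
                      for m in FRAME_RE.finditer(log.read())]
        if len(frames) < 2:
            raise ValueError("need at least two progress frames")
        (t0, mb0), (t1, mb1) = frames[0], frames[-1]
        # MB copied between the first and last frame, over the elapsed seconds.
        return (mb1 - mb0) / (t1 - t0).total_seconds()

    print(f"{average_mbps('console.log'):.1f} MBps")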
Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-05 03:33:05.699346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.699658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.357 [2024-11-05 03:33:05.699700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:42.357 [2024-11-05 03:33:05.699722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.699777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.357 [2024-11-05 03:33:05.707443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.707494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.357 [2024-11-05 03:33:05.707524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.648 ms 00:22:42.357 [2024-11-05 03:33:05.707541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.707844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.707863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.357 [2024-11-05 03:33:05.707880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:22:42.357 [2024-11-05 03:33:05.707896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.712363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.712401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.357 [2024-11-05 03:33:05.712419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.450 ms 00:22:42.357 [2024-11-05 03:33:05.712443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.718697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.718736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.357 [2024-11-05 03:33:05.718751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:22:42.357 [2024-11-05 03:33:05.718763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.756399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.756453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.357 [2024-11-05 03:33:05.756467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.595 ms 00:22:42.357 [2024-11-05 03:33:05.756477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.777562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.777603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.357 [2024-11-05 03:33:05.777617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.078 ms 00:22:42.357 [2024-11-05 03:33:05.777629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.777762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.777775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L 
metadata 00:22:42.357 [2024-11-05 03:33:05.777786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:42.357 [2024-11-05 03:33:05.777796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.813998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.814038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.357 [2024-11-05 03:33:05.814051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.244 ms 00:22:42.357 [2024-11-05 03:33:05.814061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.849786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.849837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.357 [2024-11-05 03:33:05.849850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.742 ms 00:22:42.357 [2024-11-05 03:33:05.849860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.885385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.885422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.357 [2024-11-05 03:33:05.885435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.544 ms 00:22:42.357 [2024-11-05 03:33:05.885444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.921783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.357 [2024-11-05 03:33:05.921819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.357 [2024-11-05 03:33:05.921831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.316 ms 00:22:42.357 [2024-11-05 03:33:05.921841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.357 [2024-11-05 03:33:05.921878] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.357 [2024-11-05 03:33:05.921902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.921994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 
261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.357 [2024-11-05 03:33:05.922177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922528] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 
03:33:05.922799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.358 [2024-11-05 03:33:05.922974] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.358 [2024-11-05 03:33:05.922984] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:22:42.358 [2024-11-05 03:33:05.922995] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:42.358 [2024-11-05 03:33:05.923004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:42.358 [2024-11-05 03:33:05.923014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:42.359 [2024-11-05 03:33:05.923025] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:42.359 [2024-11-05 03:33:05.923035] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.359 [2024-11-05 03:33:05.923045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.359 [2024-11-05 03:33:05.923065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.359 [2024-11-05 03:33:05.923074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.359 [2024-11-05 03:33:05.923083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.359 [2024-11-05 
03:33:05.923093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.359 [2024-11-05 03:33:05.923103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.359 [2024-11-05 03:33:05.923113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:22:42.359 [2024-11-05 03:33:05.923126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.618 [2024-11-05 03:33:05.942853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.618 [2024-11-05 03:33:05.942887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.618 [2024-11-05 03:33:05.942899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.705 ms 00:22:42.618 [2024-11-05 03:33:05.942909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:05.943408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.619 [2024-11-05 03:33:05.943427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.619 [2024-11-05 03:33:05.943443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:22:42.619 [2024-11-05 03:33:05.943453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:05.996564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.619 [2024-11-05 03:33:05.996602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.619 [2024-11-05 03:33:05.996616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.619 [2024-11-05 03:33:05.996627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:05.996683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.619 [2024-11-05 03:33:05.996694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.619 [2024-11-05 03:33:05.996710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.619 [2024-11-05 03:33:05.996720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:05.996801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.619 [2024-11-05 03:33:05.996815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.619 [2024-11-05 03:33:05.996826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.619 [2024-11-05 03:33:05.996836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:05.996853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.619 [2024-11-05 03:33:05.996863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.619 [2024-11-05 03:33:05.996873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.619 [2024-11-05 03:33:05.996888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.619 [2024-11-05 03:33:06.122277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.619 [2024-11-05 03:33:06.122342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.619 [2024-11-05 03:33:06.122357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.619 [2024-11-05 03:33:06.122368] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.878 [2024-11-05 03:33:06.224556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.224611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.879 [2024-11-05 03:33:06.224625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.224642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.224733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.224746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.879 [2024-11-05 03:33:06.224757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.224767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.224816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.224828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.879 [2024-11-05 03:33:06.224838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.224847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.224960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.224974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.879 [2024-11-05 03:33:06.224984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.224994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.225030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.225043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:42.879 [2024-11-05 03:33:06.225053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.225063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.225105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.225116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.879 [2024-11-05 03:33:06.225127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.225136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.225176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.879 [2024-11-05 03:33:06.225188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.879 [2024-11-05 03:33:06.225198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.879 [2024-11-05 03:33:06.225208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.879 [2024-11-05 03:33:06.225357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.826 ms, result 0 00:22:43.815 00:22:43.815 00:22:43.815 03:33:07 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 
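The `md5sum -c` step above is the actual pass/fail criterion of this restore test: the test file read back through ftl0 is checked against the digest recorded for it in testfile.md5, verifying that the data survived the preceding FTL shutdown/startup cycle intact, and its verdict is the "OK" on the next log line. A minimal re-implementation of that manifest check, for reproducing the verification by hand; the manifest path is taken from the command above, the two-space "<hex-digest>  <path>" layout is the standard md5sum manifest format, and everything else is illustrative:

    # Re-implements the `md5sum -c <manifest>` check run by restore.sh above.
    # The manifest path comes from the log; the two-space "<digest>  <path>"
    # layout is the standard md5sum manifest format.
    import hashlib

    def verify_md5_manifest(manifest_path: str) -> bool:
        all_ok = True
        with open(manifest_path) as manifest:
            for entry in manifest:
                expected, _, path = entry.strip().partition("  ")
                digest = hashlib.md5()
                with open(path, "rb") as data:
                    # Stream in 1 MiB chunks so a 1 GiB test file is not
                    # pulled into memory at once.
                    for chunk in iter(lambda: data.read(1 << 20), b""):
                        digest.update(chunk)
                ok = digest.hexdigest() == expected
                print(f"{path}: {'OK' if ok else 'FAILED'}")
                all_ok = all_ok and ok
        return all_ok

    verify_md5_manifest("/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5")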
00:22:45.717 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:45.717 03:33:09 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:45.717 [2024-11-05 03:33:09.137898] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:22:45.717 [2024-11-05 03:33:09.138183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77555 ] 00:22:45.717 [2024-11-05 03:33:09.299803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.975 [2024-11-05 03:33:09.414620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.236 [2024-11-05 03:33:09.766030] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.236 [2024-11-05 03:33:09.766093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:46.496 [2024-11-05 03:33:09.926306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.926354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:46.496 [2024-11-05 03:33:09.926375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:46.496 [2024-11-05 03:33:09.926385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.926431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.926444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.496 [2024-11-05 03:33:09.926458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:46.496 [2024-11-05 03:33:09.926469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.926490] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:46.496 [2024-11-05 03:33:09.927432] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:46.496 [2024-11-05 03:33:09.927462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.927473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.496 [2024-11-05 03:33:09.927484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:22:46.496 [2024-11-05 03:33:09.927493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.928896] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:46.496 [2024-11-05 03:33:09.948112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.948150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:46.496 [2024-11-05 03:33:09.948164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:22:46.496 [2024-11-05 03:33:09.948174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.948239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 
03:33:09.948252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:46.496 [2024-11-05 03:33:09.948263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:46.496 [2024-11-05 03:33:09.948273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.954957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.954988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.496 [2024-11-05 03:33:09.955000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.613 ms 00:22:46.496 [2024-11-05 03:33:09.955010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.955093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.955107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.496 [2024-11-05 03:33:09.955118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:46.496 [2024-11-05 03:33:09.955128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.955168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.955180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:46.496 [2024-11-05 03:33:09.955190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:46.496 [2024-11-05 03:33:09.955201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.955224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:46.496 [2024-11-05 03:33:09.959921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.959956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.496 [2024-11-05 03:33:09.959968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:22:46.496 [2024-11-05 03:33:09.959981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.960012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.496 [2024-11-05 03:33:09.960023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:46.496 [2024-11-05 03:33:09.960034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:46.496 [2024-11-05 03:33:09.960044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.496 [2024-11-05 03:33:09.960098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:46.496 [2024-11-05 03:33:09.960121] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:46.496 [2024-11-05 03:33:09.960157] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:46.496 [2024-11-05 03:33:09.960178] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:46.496 [2024-11-05 03:33:09.960267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:46.496 [2024-11-05 03:33:09.960281] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:46.496 [2024-11-05 03:33:09.960307] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:46.496 [2024-11-05 03:33:09.960320] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960333] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960344] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:46.497 [2024-11-05 03:33:09.960354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:46.497 [2024-11-05 03:33:09.960364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:46.497 [2024-11-05 03:33:09.960374] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:46.497 [2024-11-05 03:33:09.960388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.497 [2024-11-05 03:33:09.960398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:46.497 [2024-11-05 03:33:09.960409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:22:46.497 [2024-11-05 03:33:09.960418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.497 [2024-11-05 03:33:09.960489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.497 [2024-11-05 03:33:09.960500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:46.497 [2024-11-05 03:33:09.960510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:46.497 [2024-11-05 03:33:09.960520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.497 [2024-11-05 03:33:09.960612] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:46.497 [2024-11-05 03:33:09.960638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:46.497 [2024-11-05 03:33:09.960649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:46.497 [2024-11-05 03:33:09.960679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:46.497 [2024-11-05 03:33:09.960707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.497 [2024-11-05 03:33:09.960728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:46.497 [2024-11-05 03:33:09.960737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:46.497 [2024-11-05 03:33:09.960746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.497 [2024-11-05 03:33:09.960755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:46.497 [2024-11-05 03:33:09.960764] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:22:46.497 [2024-11-05 03:33:09.960783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:46.497 [2024-11-05 03:33:09.960802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:46.497 [2024-11-05 03:33:09.960830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:46.497 [2024-11-05 03:33:09.960858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:46.497 [2024-11-05 03:33:09.960885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:46.497 [2024-11-05 03:33:09.960911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:46.497 [2024-11-05 03:33:09.960929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:46.497 [2024-11-05 03:33:09.960938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:46.497 [2024-11-05 03:33:09.960947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.497 [2024-11-05 03:33:09.960956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:46.497 [2024-11-05 03:33:09.960966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:46.497 [2024-11-05 03:33:09.960975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.497 [2024-11-05 03:33:09.960984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:46.497 [2024-11-05 03:33:09.960993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:46.497 [2024-11-05 03:33:09.961002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.961011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:46.497 [2024-11-05 03:33:09.961020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:46.497 [2024-11-05 03:33:09.961029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.961039] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:46.497 [2024-11-05 03:33:09.961049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:46.497 [2024-11-05 03:33:09.961059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.497 [2024-11-05 
03:33:09.961068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.497 [2024-11-05 03:33:09.961078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:46.497 [2024-11-05 03:33:09.961087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:46.497 [2024-11-05 03:33:09.961097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:46.497 [2024-11-05 03:33:09.961106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:46.497 [2024-11-05 03:33:09.961115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:46.497 [2024-11-05 03:33:09.961124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:46.497 [2024-11-05 03:33:09.961135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:46.497 [2024-11-05 03:33:09.961147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:46.497 [2024-11-05 03:33:09.961169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:46.497 [2024-11-05 03:33:09.961179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:46.497 [2024-11-05 03:33:09.961189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:46.497 [2024-11-05 03:33:09.961200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:46.497 [2024-11-05 03:33:09.961210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:46.497 [2024-11-05 03:33:09.961220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:46.497 [2024-11-05 03:33:09.961230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:46.497 [2024-11-05 03:33:09.961240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:46.497 [2024-11-05 03:33:09.961251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:46.497 [2024-11-05 
03:33:09.961318] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:46.497 [2024-11-05 03:33:09.961333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.497 [2024-11-05 03:33:09.961345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:46.498 [2024-11-05 03:33:09.961356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:46.498 [2024-11-05 03:33:09.961366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:46.498 [2024-11-05 03:33:09.961377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:46.498 [2024-11-05 03:33:09.961389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:09.961399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:46.498 [2024-11-05 03:33:09.961410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:22:46.498 [2024-11-05 03:33:09.961419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:09.999789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:09.999828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:46.498 [2024-11-05 03:33:09.999843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.387 ms 00:22:46.498 [2024-11-05 03:33:09.999853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:09.999934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:09.999946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:46.498 [2024-11-05 03:33:09.999956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:46.498 [2024-11-05 03:33:09.999967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:10.059712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:10.059754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:46.498 [2024-11-05 03:33:10.059768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.788 ms 00:22:46.498 [2024-11-05 03:33:10.059779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:10.059815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:10.059826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.498 [2024-11-05 03:33:10.059838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:46.498 [2024-11-05 03:33:10.059852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:10.060342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:10.060364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.498 [2024-11-05 03:33:10.060375] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:22:46.498 [2024-11-05 03:33:10.060385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.498 [2024-11-05 03:33:10.060503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.498 [2024-11-05 03:33:10.060517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.498 [2024-11-05 03:33:10.060527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:46.498 [2024-11-05 03:33:10.060543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.079785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.079823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.757 [2024-11-05 03:33:10.079841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 00:22:46.757 [2024-11-05 03:33:10.079852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.099478] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:46.757 [2024-11-05 03:33:10.099517] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:46.757 [2024-11-05 03:33:10.099532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.099543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:46.757 [2024-11-05 03:33:10.099554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.613 ms 00:22:46.757 [2024-11-05 03:33:10.099564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.129124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.129184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:46.757 [2024-11-05 03:33:10.129199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.565 ms 00:22:46.757 [2024-11-05 03:33:10.129211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.146802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.146839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:46.757 [2024-11-05 03:33:10.146853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.563 ms 00:22:46.757 [2024-11-05 03:33:10.146863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.164859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.164895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:46.757 [2024-11-05 03:33:10.164908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.988 ms 00:22:46.757 [2024-11-05 03:33:10.164917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.165661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.165695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:46.757 [2024-11-05 03:33:10.165707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.628 ms 00:22:46.757 [2024-11-05 03:33:10.165721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.251691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.251757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:46.757 [2024-11-05 03:33:10.251780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.085 ms 00:22:46.757 [2024-11-05 03:33:10.251791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.757 [2024-11-05 03:33:10.262593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:46.757 [2024-11-05 03:33:10.265185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.757 [2024-11-05 03:33:10.265217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:46.758 [2024-11-05 03:33:10.265230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.365 ms 00:22:46.758 [2024-11-05 03:33:10.265241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.265336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.265350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:46.758 [2024-11-05 03:33:10.265361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:46.758 [2024-11-05 03:33:10.265375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.265464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.265477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:46.758 [2024-11-05 03:33:10.265487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:46.758 [2024-11-05 03:33:10.265497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.265522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.265533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:46.758 [2024-11-05 03:33:10.265543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:46.758 [2024-11-05 03:33:10.265553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.265585] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:46.758 [2024-11-05 03:33:10.265599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.265609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:46.758 [2024-11-05 03:33:10.265619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:46.758 [2024-11-05 03:33:10.265629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.302047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.302086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:46.758 [2024-11-05 03:33:10.302099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.455 ms 00:22:46.758 [2024-11-05 03:33:10.302116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
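
Each FTL management step in this log is traced as a fixed quadruple of NOTICE records from mngt/ftl_mngt.c — Action, name, duration, status — so per-step timings can be pulled straight out of a captured copy of the console output. Below is a minimal sketch of one way to do that; the regex is reverse-engineered from the records visible here (it is not an SPDK-provided tool), and build.log stands in for wherever this console text was saved:

    import re

    log = open("build.log").read()  # placeholder path for a saved copy of this console log

    # Each step is traced as four NOTICE records: Action, name, duration, status.
    # Pair every "name:" record with the "duration:" record that follows it.
    pairs = re.findall(
        r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s+\d{2}:\d{2}:\d{2}\.\d+.*?"
        r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms",
        log, re.S)

    # Rank steps by duration to see where startup/shutdown time goes.
    for name, ms in sorted(pairs, key=lambda p: -float(p[1])):
        print(f"{float(ms):10.3f} ms  {name}")
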
00:22:46.758 [2024-11-05 03:33:10.302191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.758 [2024-11-05 03:33:10.302204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:46.758 [2024-11-05 03:33:10.302214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:46.758 [2024-11-05 03:33:10.302224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.758 [2024-11-05 03:33:10.303420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.217 ms, result 0 00:22:48.151  [2024-11-05T03:33:52.790Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-05 03:33:52.545830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.546069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.206 [2024-11-05 03:33:52.546108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.004 ms 00:23:29.206 [2024-11-05 03:33:52.546133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.546946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:29.206 [2024-11-05 03:33:52.552905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.552968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.206 [2024-11-05 03:33:52.552982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.933 ms 00:23:29.206 [2024-11-05 03:33:52.552993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.564684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.564726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.206 [2024-11-05 03:33:52.564741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.110 ms 00:23:29.206 [2024-11-05 03:33:52.564753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.588395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.588445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.206 [2024-11-05 03:33:52.588460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.655 ms 00:23:29.206 [2024-11-05 03:33:52.588471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.593463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.593497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.206 [2024-11-05 03:33:52.593509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:23:29.206 [2024-11-05 03:33:52.593519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.629930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.629969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.206 [2024-11-05 03:33:52.629983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.421 ms 00:23:29.206 [2024-11-05 03:33:52.629994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.651053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.651098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.206 [2024-11-05 03:33:52.651113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.055 ms 00:23:29.206 [2024-11-05 03:33:52.651124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.206 [2024-11-05 03:33:52.756558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.206 [2024-11-05 03:33:52.756613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.206 [2024-11-05 03:33:52.756628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.563 ms 00:23:29.207 [2024-11-05 03:33:52.756640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.467 [2024-11-05 03:33:52.794077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.467 
[2024-11-05 03:33:52.794114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.467 [2024-11-05 03:33:52.794128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.480 ms 00:23:29.467 [2024-11-05 03:33:52.794139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.467 [2024-11-05 03:33:52.830897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.467 [2024-11-05 03:33:52.830945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:29.467 [2024-11-05 03:33:52.830959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.780 ms 00:23:29.467 [2024-11-05 03:33:52.830969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.467 [2024-11-05 03:33:52.866624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.467 [2024-11-05 03:33:52.866672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.467 [2024-11-05 03:33:52.866692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.675 ms 00:23:29.467 [2024-11-05 03:33:52.866702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.467 [2024-11-05 03:33:52.903079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.467 [2024-11-05 03:33:52.903114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.467 [2024-11-05 03:33:52.903128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.362 ms 00:23:29.467 [2024-11-05 03:33:52.903138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.467 [2024-11-05 03:33:52.903175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.467 [2024-11-05 03:33:52.903191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94208 / 261120 wr_cnt: 1 state: open 00:23:29.467 [2024-11-05 03:33:52.903204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.467 [2024-11-05 03:33:52.903215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 
03:33:52.903341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:23:29.468 [2024-11-05 03:33:52.903610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.468 [2024-11-05 03:33:52.903974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.903985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.903997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:29.469 [2024-11-05 03:33:52.904297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.469 [2024-11-05 03:33:52.904308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:23:29.469 [2024-11-05 03:33:52.904319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94208 00:23:29.469 [2024-11-05 03:33:52.904329] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95168 00:23:29.469 [2024-11-05 03:33:52.904340] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94208 00:23:29.469 [2024-11-05 03:33:52.904351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:23:29.469 [2024-11-05 03:33:52.904361] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.469 [2024-11-05 03:33:52.904378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:29.469 [2024-11-05 03:33:52.904399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.469 [2024-11-05 03:33:52.904408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.469 [2024-11-05 03:33:52.904418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.469 [2024-11-05 03:33:52.904427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.469 [2024-11-05 03:33:52.904438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.469 [2024-11-05 03:33:52.904448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:23:29.469 [2024-11-05 
03:33:52.904458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.924324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.469 [2024-11-05 03:33:52.924358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.469 [2024-11-05 03:33:52.924373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.865 ms 00:23:29.469 [2024-11-05 03:33:52.924389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.924962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.469 [2024-11-05 03:33:52.924985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.469 [2024-11-05 03:33:52.924996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:23:29.469 [2024-11-05 03:33:52.925013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.977539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.469 [2024-11-05 03:33:52.977577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.469 [2024-11-05 03:33:52.977594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.469 [2024-11-05 03:33:52.977605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.977663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.469 [2024-11-05 03:33:52.977675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.469 [2024-11-05 03:33:52.977686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.469 [2024-11-05 03:33:52.977696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.977758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.469 [2024-11-05 03:33:52.977771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.469 [2024-11-05 03:33:52.977783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.469 [2024-11-05 03:33:52.977798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.469 [2024-11-05 03:33:52.977814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.469 [2024-11-05 03:33:52.977825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.469 [2024-11-05 03:33:52.977835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.469 [2024-11-05 03:33:52.977845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.104026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.104079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.728 [2024-11-05 03:33:53.104101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.104112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.206496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.206549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.728 [2024-11-05 03:33:53.206565] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.206576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.206677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.206699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.728 [2024-11-05 03:33:53.206710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.206720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.206775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.206788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.728 [2024-11-05 03:33:53.206798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.206808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.206912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.206926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.728 [2024-11-05 03:33:53.206938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.206949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.206993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.207005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.728 [2024-11-05 03:33:53.207016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.207026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.207065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.207078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.728 [2024-11-05 03:33:53.207089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.207099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.207147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.728 [2024-11-05 03:33:53.207160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.728 [2024-11-05 03:33:53.207172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.728 [2024-11-05 03:33:53.207183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.728 [2024-11-05 03:33:53.207347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 664.278 ms, result 0 00:23:31.136 00:23:31.136 00:23:31.136 03:33:54 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:31.395 [2024-11-05 03:33:54.808062] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
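
The spdk_dd invocation recorded just above is the restore-verification read: --ib names the input bdev (ftl0), --of the output file, --json the SPDK app config, and --skip/--count are given in input blocks. At the FTL bdev's 4 KiB block size (an inference, but consistent with the totals: 262144 blocks x 4 KiB = 1024 MiB, exactly what the Copying progress reports, starting 131072 blocks = 512 MiB into the device), this reads the second 1 GiB region back out. A sketch of driving the same command from Python, with the binary and file paths taken verbatim from the log:

    import subprocess

    BLOCK = 4096  # assumed FTL bdev block size; 262144 blocks * 4 KiB = 1024 MiB
    cmd = [
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
        "--ib=ftl0",                                             # input bdev
        "--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile",   # output file
        "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json",
        "--skip=131072",   # skip 131072 input blocks (512 MiB at the assumed 4 KiB)
        "--count=262144",  # copy 262144 input blocks (1024 MiB)
    ]
    subprocess.run(cmd, check=True)
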
00:23:31.395 [2024-11-05 03:33:54.808201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78018 ] 00:23:31.653 [2024-11-05 03:33:54.993140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.653 [2024-11-05 03:33:55.105300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.912 [2024-11-05 03:33:55.461820] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.912 [2024-11-05 03:33:55.461900] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.173 [2024-11-05 03:33:55.623577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.623631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:32.173 [2024-11-05 03:33:55.623653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:32.173 [2024-11-05 03:33:55.623664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.623711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.623723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.173 [2024-11-05 03:33:55.623738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:32.173 [2024-11-05 03:33:55.623747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.623770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:32.173 [2024-11-05 03:33:55.624668] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:32.173 [2024-11-05 03:33:55.624698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.624710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.173 [2024-11-05 03:33:55.624722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:23:32.173 [2024-11-05 03:33:55.624732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.626204] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:32.173 [2024-11-05 03:33:55.644691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.644733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:32.173 [2024-11-05 03:33:55.644749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.518 ms 00:23:32.173 [2024-11-05 03:33:55.644760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.644825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.644839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:32.173 [2024-11-05 03:33:55.644851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:32.173 [2024-11-05 03:33:55.644862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.651586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:32.173 [2024-11-05 03:33:55.651617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.173 [2024-11-05 03:33:55.651630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.663 ms 00:23:32.173 [2024-11-05 03:33:55.651640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.651723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.651736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.173 [2024-11-05 03:33:55.651748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:32.173 [2024-11-05 03:33:55.651758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.651798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.651810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:32.173 [2024-11-05 03:33:55.651821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.173 [2024-11-05 03:33:55.651832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.651855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.173 [2024-11-05 03:33:55.656586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.656621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.173 [2024-11-05 03:33:55.656634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:23:32.173 [2024-11-05 03:33:55.656648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.656679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.656691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:32.173 [2024-11-05 03:33:55.656702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:32.173 [2024-11-05 03:33:55.656712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.656764] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:32.173 [2024-11-05 03:33:55.656789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:32.173 [2024-11-05 03:33:55.656825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:32.173 [2024-11-05 03:33:55.656845] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:32.173 [2024-11-05 03:33:55.656936] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:32.173 [2024-11-05 03:33:55.656951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:32.173 [2024-11-05 03:33:55.656964] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:32.173 [2024-11-05 03:33:55.656977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:32.173 [2024-11-05 03:33:55.656990] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657002] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:32.173 [2024-11-05 03:33:55.657011] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:32.173 [2024-11-05 03:33:55.657022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:32.173 [2024-11-05 03:33:55.657032] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:32.173 [2024-11-05 03:33:55.657047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.657058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:32.173 [2024-11-05 03:33:55.657069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:23:32.173 [2024-11-05 03:33:55.657079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.657150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.173 [2024-11-05 03:33:55.657162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:32.173 [2024-11-05 03:33:55.657173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:32.173 [2024-11-05 03:33:55.657183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.173 [2024-11-05 03:33:55.657278] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:32.173 [2024-11-05 03:33:55.657319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:32.173 [2024-11-05 03:33:55.657331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:32.173 [2024-11-05 03:33:55.657363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:32.173 [2024-11-05 03:33:55.657392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.173 [2024-11-05 03:33:55.657412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:32.173 [2024-11-05 03:33:55.657421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:32.173 [2024-11-05 03:33:55.657430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.173 [2024-11-05 03:33:55.657439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:32.173 [2024-11-05 03:33:55.657449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:32.173 [2024-11-05 03:33:55.657468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:32.173 [2024-11-05 03:33:55.657487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657496] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:32.173 [2024-11-05 03:33:55.657515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:32.173 [2024-11-05 03:33:55.657545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:32.173 [2024-11-05 03:33:55.657572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:32.173 [2024-11-05 03:33:55.657600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:32.173 [2024-11-05 03:33:55.657609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.173 [2024-11-05 03:33:55.657618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:32.173 [2024-11-05 03:33:55.657628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:32.174 [2024-11-05 03:33:55.657639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.174 [2024-11-05 03:33:55.657648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:32.174 [2024-11-05 03:33:55.657657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:32.174 [2024-11-05 03:33:55.657666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.174 [2024-11-05 03:33:55.657675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:32.174 [2024-11-05 03:33:55.657690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:32.174 [2024-11-05 03:33:55.657707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.174 [2024-11-05 03:33:55.657717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:32.174 [2024-11-05 03:33:55.657728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:32.174 [2024-11-05 03:33:55.657737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.174 [2024-11-05 03:33:55.657746] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:32.174 [2024-11-05 03:33:55.657757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:32.174 [2024-11-05 03:33:55.657767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.174 [2024-11-05 03:33:55.657777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.174 [2024-11-05 03:33:55.657787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:32.174 [2024-11-05 03:33:55.657797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:32.174 [2024-11-05 03:33:55.657806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:32.174 
[2024-11-05 03:33:55.657816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:32.174 [2024-11-05 03:33:55.657825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:32.174 [2024-11-05 03:33:55.657835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:32.174 [2024-11-05 03:33:55.657846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:32.174 [2024-11-05 03:33:55.657859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.657871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:32.174 [2024-11-05 03:33:55.657881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:32.174 [2024-11-05 03:33:55.657892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:32.174 [2024-11-05 03:33:55.657903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:32.174 [2024-11-05 03:33:55.657913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:32.174 [2024-11-05 03:33:55.657923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:32.174 [2024-11-05 03:33:55.657934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:32.174 [2024-11-05 03:33:55.657944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:32.174 [2024-11-05 03:33:55.657955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:32.174 [2024-11-05 03:33:55.657966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.657976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.657986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.657996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.658006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:32.174 [2024-11-05 03:33:55.658016] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:32.174 [2024-11-05 03:33:55.658030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.658042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:32.174 [2024-11-05 03:33:55.658053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:32.174 [2024-11-05 03:33:55.658066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:32.174 [2024-11-05 03:33:55.658077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:32.174 [2024-11-05 03:33:55.658088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.174 [2024-11-05 03:33:55.658099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:32.174 [2024-11-05 03:33:55.658109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:23:32.174 [2024-11-05 03:33:55.658120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.174 [2024-11-05 03:33:55.697217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.174 [2024-11-05 03:33:55.697255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.174 [2024-11-05 03:33:55.697269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.113 ms 00:23:32.174 [2024-11-05 03:33:55.697279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.174 [2024-11-05 03:33:55.697370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.174 [2024-11-05 03:33:55.697384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.174 [2024-11-05 03:33:55.697395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:32.174 [2024-11-05 03:33:55.697405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.774452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.774493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.434 [2024-11-05 03:33:55.774516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.114 ms 00:23:32.434 [2024-11-05 03:33:55.774529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.774571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.774584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.434 [2024-11-05 03:33:55.774596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:32.434 [2024-11-05 03:33:55.774611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.775100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.775124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.434 [2024-11-05 03:33:55.775136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:23:32.434 [2024-11-05 03:33:55.775147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.775266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.775301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.434 [2024-11-05 03:33:55.775313] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:32.434 [2024-11-05 03:33:55.775330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.795177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.795214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.434 [2024-11-05 03:33:55.795231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:23:32.434 [2024-11-05 03:33:55.795243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.814999] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:32.434 [2024-11-05 03:33:55.815038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:32.434 [2024-11-05 03:33:55.815054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.815065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:32.434 [2024-11-05 03:33:55.815077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.727 ms 00:23:32.434 [2024-11-05 03:33:55.815087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.844831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.844878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.434 [2024-11-05 03:33:55.844894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.747 ms 00:23:32.434 [2024-11-05 03:33:55.844905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.863366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.863415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.434 [2024-11-05 03:33:55.863430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.446 ms 00:23:32.434 [2024-11-05 03:33:55.863440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.882117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.882167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.434 [2024-11-05 03:33:55.882182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:23:32.434 [2024-11-05 03:33:55.882193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.882990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.883025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.434 [2024-11-05 03:33:55.883038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:23:32.434 [2024-11-05 03:33:55.883052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.970277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.970348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.434 [2024-11-05 03:33:55.970372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.343 ms 00:23:32.434 [2024-11-05 03:33:55.970384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.981193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:32.434 [2024-11-05 03:33:55.984173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.984207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.434 [2024-11-05 03:33:55.984221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.760 ms 00:23:32.434 [2024-11-05 03:33:55.984232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.984330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.984344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.434 [2024-11-05 03:33:55.984356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:32.434 [2024-11-05 03:33:55.984371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.985821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.985876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.434 [2024-11-05 03:33:55.985889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:23:32.434 [2024-11-05 03:33:55.985900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.985937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.985950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.434 [2024-11-05 03:33:55.985961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:32.434 [2024-11-05 03:33:55.985971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.434 [2024-11-05 03:33:55.986010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.434 [2024-11-05 03:33:55.986027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.434 [2024-11-05 03:33:55.986037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.434 [2024-11-05 03:33:55.986048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:32.434 [2024-11-05 03:33:55.986059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.694 [2024-11-05 03:33:56.023141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.694 [2024-11-05 03:33:56.023184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.694 [2024-11-05 03:33:56.023200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.122 ms 00:23:32.694 [2024-11-05 03:33:56.023217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.694 [2024-11-05 03:33:56.023307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.694 [2024-11-05 03:33:56.023320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.694 [2024-11-05 03:33:56.023333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:32.694 [2024-11-05 03:33:56.023343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
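Each management step above is logged by mngt/ftl_mngt.c as a fixed group of trace_step records: Action, then name, duration, and status. Summing the per-step durations approximates the overall 'FTL startup' total reported by the finish_msg record that follows. A minimal shell sketch for totaling those durations from a saved copy of this log (not an SPDK tool; the log path is a placeholder):

    log=${1:-autotest.log}   # hypothetical path to a saved copy of this console log
    grep -oE 'duration: [0-9]+\.[0-9]+ ms' "$log" |
        awk '{ sum += $2; n++ } END { printf "%d steps, %.3f ms total\n", n, sum }'

Run against the full log this totals every management step (startup, shutdown, and rollback alike), so filter by step name first if only one phase is of interest.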
00:23:32.694 [2024-11-05 03:33:56.024546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.089 ms, result 0 00:23:34.071  [2024-11-05T03:33:58.592Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-05T03:33:59.527Z] Copying: 43/1024 [MB] (24 MBps) [2024-11-05T03:34:00.464Z] Copying: 68/1024 [MB] (25 MBps) [2024-11-05T03:34:01.398Z] Copying: 94/1024 [MB] (25 MBps) [2024-11-05T03:34:02.333Z] Copying: 119/1024 [MB] (24 MBps) [2024-11-05T03:34:03.269Z] Copying: 144/1024 [MB] (25 MBps) [2024-11-05T03:34:04.646Z] Copying: 169/1024 [MB] (25 MBps) [2024-11-05T03:34:05.583Z] Copying: 194/1024 [MB] (24 MBps) [2024-11-05T03:34:06.519Z] Copying: 219/1024 [MB] (24 MBps) [2024-11-05T03:34:07.456Z] Copying: 243/1024 [MB] (24 MBps) [2024-11-05T03:34:08.393Z] Copying: 268/1024 [MB] (24 MBps) [2024-11-05T03:34:09.330Z] Copying: 292/1024 [MB] (24 MBps) [2024-11-05T03:34:10.265Z] Copying: 317/1024 [MB] (24 MBps) [2024-11-05T03:34:11.653Z] Copying: 341/1024 [MB] (23 MBps) [2024-11-05T03:34:12.588Z] Copying: 364/1024 [MB] (23 MBps) [2024-11-05T03:34:13.526Z] Copying: 387/1024 [MB] (22 MBps) [2024-11-05T03:34:14.464Z] Copying: 410/1024 [MB] (22 MBps) [2024-11-05T03:34:15.401Z] Copying: 433/1024 [MB] (23 MBps) [2024-11-05T03:34:16.339Z] Copying: 457/1024 [MB] (24 MBps) [2024-11-05T03:34:17.277Z] Copying: 482/1024 [MB] (24 MBps) [2024-11-05T03:34:18.655Z] Copying: 506/1024 [MB] (24 MBps) [2024-11-05T03:34:19.223Z] Copying: 531/1024 [MB] (24 MBps) [2024-11-05T03:34:20.633Z] Copying: 556/1024 [MB] (24 MBps) [2024-11-05T03:34:21.570Z] Copying: 581/1024 [MB] (24 MBps) [2024-11-05T03:34:22.509Z] Copying: 605/1024 [MB] (24 MBps) [2024-11-05T03:34:23.446Z] Copying: 630/1024 [MB] (25 MBps) [2024-11-05T03:34:24.383Z] Copying: 655/1024 [MB] (25 MBps) [2024-11-05T03:34:25.321Z] Copying: 680/1024 [MB] (24 MBps) [2024-11-05T03:34:26.259Z] Copying: 705/1024 [MB] (24 MBps) [2024-11-05T03:34:27.639Z] Copying: 730/1024 [MB] (24 MBps) [2024-11-05T03:34:28.207Z] Copying: 754/1024 [MB] (24 MBps) [2024-11-05T03:34:29.609Z] Copying: 779/1024 [MB] (24 MBps) [2024-11-05T03:34:30.543Z] Copying: 804/1024 [MB] (24 MBps) [2024-11-05T03:34:31.479Z] Copying: 829/1024 [MB] (24 MBps) [2024-11-05T03:34:32.415Z] Copying: 854/1024 [MB] (24 MBps) [2024-11-05T03:34:33.352Z] Copying: 879/1024 [MB] (25 MBps) [2024-11-05T03:34:34.288Z] Copying: 904/1024 [MB] (25 MBps) [2024-11-05T03:34:35.224Z] Copying: 932/1024 [MB] (27 MBps) [2024-11-05T03:34:36.600Z] Copying: 958/1024 [MB] (26 MBps) [2024-11-05T03:34:37.562Z] Copying: 985/1024 [MB] (26 MBps) [2024-11-05T03:34:37.562Z] Copying: 1014/1024 [MB] (28 MBps) [2024-11-05T03:34:37.562Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-05 03:34:37.546884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.978 [2024-11-05 03:34:37.546945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:13.978 [2024-11-05 03:34:37.546964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:13.978 [2024-11-05 03:34:37.546975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.978 [2024-11-05 03:34:37.547006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:13.978 [2024-11-05 03:34:37.551594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.978 [2024-11-05 03:34:37.551633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:13.978 
[2024-11-05 03:34:37.551646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.579 ms 00:24:13.978 [2024-11-05 03:34:37.551656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.978 [2024-11-05 03:34:37.551847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.978 [2024-11-05 03:34:37.551860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:13.978 [2024-11-05 03:34:37.551871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:24:13.979 [2024-11-05 03:34:37.551881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.979 [2024-11-05 03:34:37.557444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.979 [2024-11-05 03:34:37.557485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:13.979 [2024-11-05 03:34:37.557498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.550 ms 00:24:13.979 [2024-11-05 03:34:37.557509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.237 [2024-11-05 03:34:37.563084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.237 [2024-11-05 03:34:37.563119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:14.237 [2024-11-05 03:34:37.563132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.547 ms 00:24:14.237 [2024-11-05 03:34:37.563142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.237 [2024-11-05 03:34:37.601581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.237 [2024-11-05 03:34:37.601622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:14.237 [2024-11-05 03:34:37.601637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.450 ms 00:24:14.237 [2024-11-05 03:34:37.601648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.237 [2024-11-05 03:34:37.623029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.237 [2024-11-05 03:34:37.623077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:14.237 [2024-11-05 03:34:37.623091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.377 ms 00:24:14.237 [2024-11-05 03:34:37.623103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.237 [2024-11-05 03:34:37.753199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.237 [2024-11-05 03:34:37.753253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:14.237 [2024-11-05 03:34:37.753267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.263 ms 00:24:14.237 [2024-11-05 03:34:37.753278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.237 [2024-11-05 03:34:37.790601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.237 [2024-11-05 03:34:37.790640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:14.237 [2024-11-05 03:34:37.790654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.354 ms 00:24:14.237 [2024-11-05 03:34:37.790665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.498 [2024-11-05 03:34:37.827730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.498 [2024-11-05 03:34:37.827767] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:14.498 [2024-11-05 03:34:37.827794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.080 ms 00:24:14.498 [2024-11-05 03:34:37.827804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.498 [2024-11-05 03:34:37.863162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.498 [2024-11-05 03:34:37.863199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:14.498 [2024-11-05 03:34:37.863213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.377 ms 00:24:14.498 [2024-11-05 03:34:37.863223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.498 [2024-11-05 03:34:37.898605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.498 [2024-11-05 03:34:37.898653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.498 [2024-11-05 03:34:37.898667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.351 ms 00:24:14.498 [2024-11-05 03:34:37.898677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.498 [2024-11-05 03:34:37.898719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.498 [2024-11-05 03:34:37.898736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:14.498 [2024-11-05 03:34:37.898749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.498 [2024-11-05 03:34:37.898880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.898996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899435] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899700] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:14.499 [2024-11-05 03:34:37.899816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:14.499 [2024-11-05 03:34:37.899826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f0dba11-4f31-4028-aa3d-142ac12375d9 00:24:14.499 [2024-11-05 03:34:37.899838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:14.499 [2024-11-05 03:34:37.899848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 37824 00:24:14.499 [2024-11-05 03:34:37.899859] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 36864 00:24:14.500 [2024-11-05 03:34:37.899869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0260 00:24:14.500 [2024-11-05 03:34:37.899879] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:14.500 [2024-11-05 03:34:37.899896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:14.500 [2024-11-05 03:34:37.899906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:14.500 [2024-11-05 03:34:37.899925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:14.500 [2024-11-05 03:34:37.899934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:14.500 [2024-11-05 03:34:37.899944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.500 [2024-11-05 03:34:37.899955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:14.500 [2024-11-05 03:34:37.899965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:24:14.500 [2024-11-05 03:34:37.899975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.500 [2024-11-05 03:34:37.919720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.500 [2024-11-05 03:34:37.919755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:14.500 [2024-11-05 03:34:37.919768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.743 ms 00:24:14.500 [2024-11-05 03:34:37.919784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
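The WAF figure in the ftl_debug.c dump is consistent with total writes divided by user writes: 37824 / 36864 ≈ 1.0260. A one-line check, with both values copied from the stats dump above:

    total=37824; user=36864   # copied from the ftl_debug.c stats dump above
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF = %.4f\n", t / u }'
    # prints: WAF = 1.0260

The extra ~2.6% over the user-issued writes is the FTL's own metadata and relocation traffic.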
00:24:14.500 [2024-11-05 03:34:37.920336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.500 [2024-11-05 03:34:37.920357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:14.500 [2024-11-05 03:34:37.920368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:24:14.500 [2024-11-05 03:34:37.920378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.500 [2024-11-05 03:34:37.972231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.500 [2024-11-05 03:34:37.972269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.500 [2024-11-05 03:34:37.972295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.500 [2024-11-05 03:34:37.972307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.500 [2024-11-05 03:34:37.972360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.500 [2024-11-05 03:34:37.972371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.500 [2024-11-05 03:34:37.972381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.500 [2024-11-05 03:34:37.972391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.500 [2024-11-05 03:34:37.972453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.500 [2024-11-05 03:34:37.972466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.500 [2024-11-05 03:34:37.972477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.500 [2024-11-05 03:34:37.972492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.500 [2024-11-05 03:34:37.972509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.500 [2024-11-05 03:34:37.972519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.500 [2024-11-05 03:34:37.972530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.500 [2024-11-05 03:34:37.972548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.098124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.098177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.760 [2024-11-05 03:34:38.098200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.098211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.198822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.198873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.760 [2024-11-05 03:34:38.198890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.198900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.760 [2024-11-05 03:34:38.199025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 
03:34:38.199036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.760 [2024-11-05 03:34:38.199115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.199125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.760 [2024-11-05 03:34:38.199259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.199269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.760 [2024-11-05 03:34:38.199354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.199363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.760 [2024-11-05 03:34:38.199423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.199432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.760 [2024-11-05 03:34:38.199494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.760 [2024-11-05 03:34:38.199504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.760 [2024-11-05 03:34:38.199514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.760 [2024-11-05 03:34:38.199662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.784 ms, result 0 00:24:15.698 00:24:15.698 00:24:15.698 03:34:39 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:17.602 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76406 00:24:17.602 03:34:41 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76406 ']' 00:24:17.602 03:34:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76406 00:24:17.602 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76406) - No such process 00:24:17.602 Process with pid 76406 is not found 00:24:17.602 03:34:41 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76406 is not found' 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:17.602 Remove shared memory files 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:17.602 03:34:41 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:17.862 03:34:41 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:17.862 03:34:41 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:17.862 ************************************ 00:24:17.862 END TEST ftl_restore 00:24:17.862 ************************************ 00:24:17.862 00:24:17.862 real 3m23.490s 00:24:17.862 user 3m10.756s 00:24:17.862 sys 0m14.206s 00:24:17.862 03:34:41 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.862 03:34:41 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:17.862 03:34:41 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:17.862 03:34:41 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:17.862 03:34:41 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.862 03:34:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:17.862 ************************************ 00:24:17.862 START TEST ftl_dirty_shutdown 00:24:17.862 ************************************ 00:24:17.862 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:17.862 * Looking for test storage... 
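run_test invokes dirty_shutdown.sh with "-c 0000:00:10.0" plus the positional "0000:00:11.0", and the getopts loop traced below assigns them to nv_cache and device. A condensed sketch of that argument handling (simplified from the xtrace, not the script verbatim; the -u branch taking a device UUID is an assumption):

    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # 0000:00:10.0 here: the NV-cache controller BDF
        u) uuid=$OPTARG ;;       # assumption: lets other runs pass a device UUID instead
      esac
    done
    shift 2                      # drop "-c <bdf>", exactly as the trace shows
    device=$1                    # 0000:00:11.0 here: the base-device BDF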
00:24:17.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:17.862 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:17.862 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:17.862 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.122 --rc genhtml_branch_coverage=1 00:24:18.122 --rc genhtml_function_coverage=1 00:24:18.122 --rc genhtml_legend=1 00:24:18.122 --rc geninfo_all_blocks=1 00:24:18.122 --rc geninfo_unexecuted_blocks=1 00:24:18.122 00:24:18.122 ' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.122 --rc genhtml_branch_coverage=1 00:24:18.122 --rc genhtml_function_coverage=1 00:24:18.122 --rc genhtml_legend=1 00:24:18.122 --rc geninfo_all_blocks=1 00:24:18.122 --rc geninfo_unexecuted_blocks=1 00:24:18.122 00:24:18.122 ' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.122 --rc genhtml_branch_coverage=1 00:24:18.122 --rc genhtml_function_coverage=1 00:24:18.122 --rc genhtml_legend=1 00:24:18.122 --rc geninfo_all_blocks=1 00:24:18.122 --rc geninfo_unexecuted_blocks=1 00:24:18.122 00:24:18.122 ' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.122 --rc genhtml_branch_coverage=1 00:24:18.122 --rc genhtml_function_coverage=1 00:24:18.122 --rc genhtml_legend=1 00:24:18.122 --rc geninfo_all_blocks=1 00:24:18.122 --rc geninfo_unexecuted_blocks=1 00:24:18.122 00:24:18.122 ' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:18.122 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:18.123 03:34:41 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78552 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78552 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78552 ']' 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.123 03:34:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:18.123 [2024-11-05 03:34:41.654242] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
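At this point the script has launched the target ("spdk_tgt -m 0x1", svcpid=78552) and parks in waitforlisten until the app's RPC socket answers, only then issuing bdev RPCs. A simplified sketch of that launch-and-wait pattern (waitforlisten in autotest_common.sh does more bookkeeping; the polling loop and its 240-try cap here are illustrative, not the helper's actual internals):

    "$rootdir/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    rpc_py="$rootdir/scripts/rpc.py"
    for ((i = 0; i < 240; i++)); do
        # any successful RPC proves the target is listening on /var/tmp/spdk.sock
        "$rpc_py" -t 1 rpc_get_methods &> /dev/null && break
        sleep 1
    done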
00:24:18.123 [2024-11-05 03:34:41.654508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78552 ] 00:24:18.382 [2024-11-05 03:34:41.836744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.382 [2024-11-05 03:34:41.950154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:19.320 03:34:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:19.579 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:19.838 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:19.838 { 00:24:19.838 "name": "nvme0n1", 00:24:19.838 "aliases": [ 00:24:19.838 "8ffb4af1-3812-4119-8995-093d3c12e83d" 00:24:19.838 ], 00:24:19.838 "product_name": "NVMe disk", 00:24:19.838 "block_size": 4096, 00:24:19.838 "num_blocks": 1310720, 00:24:19.838 "uuid": "8ffb4af1-3812-4119-8995-093d3c12e83d", 00:24:19.838 "numa_id": -1, 00:24:19.838 "assigned_rate_limits": { 00:24:19.838 "rw_ios_per_sec": 0, 00:24:19.838 "rw_mbytes_per_sec": 0, 00:24:19.838 "r_mbytes_per_sec": 0, 00:24:19.838 "w_mbytes_per_sec": 0 00:24:19.838 }, 00:24:19.838 "claimed": true, 00:24:19.838 "claim_type": "read_many_write_one", 00:24:19.838 "zoned": false, 00:24:19.838 "supported_io_types": { 00:24:19.838 "read": true, 00:24:19.838 "write": true, 00:24:19.838 "unmap": true, 00:24:19.838 "flush": true, 00:24:19.838 "reset": true, 00:24:19.838 "nvme_admin": true, 00:24:19.838 "nvme_io": true, 00:24:19.838 "nvme_io_md": false, 00:24:19.838 "write_zeroes": true, 00:24:19.838 "zcopy": false, 00:24:19.838 "get_zone_info": false, 00:24:19.838 "zone_management": false, 00:24:19.838 "zone_append": false, 00:24:19.838 "compare": true, 00:24:19.838 "compare_and_write": false, 00:24:19.838 "abort": true, 00:24:19.838 "seek_hole": false, 00:24:19.838 "seek_data": false, 00:24:19.838 
"copy": true, 00:24:19.838 "nvme_iov_md": false 00:24:19.838 }, 00:24:19.838 "driver_specific": { 00:24:19.838 "nvme": [ 00:24:19.838 { 00:24:19.838 "pci_address": "0000:00:11.0", 00:24:19.838 "trid": { 00:24:19.838 "trtype": "PCIe", 00:24:19.838 "traddr": "0000:00:11.0" 00:24:19.838 }, 00:24:19.838 "ctrlr_data": { 00:24:19.838 "cntlid": 0, 00:24:19.838 "vendor_id": "0x1b36", 00:24:19.838 "model_number": "QEMU NVMe Ctrl", 00:24:19.838 "serial_number": "12341", 00:24:19.838 "firmware_revision": "8.0.0", 00:24:19.838 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:19.838 "oacs": { 00:24:19.838 "security": 0, 00:24:19.838 "format": 1, 00:24:19.838 "firmware": 0, 00:24:19.838 "ns_manage": 1 00:24:19.838 }, 00:24:19.838 "multi_ctrlr": false, 00:24:19.838 "ana_reporting": false 00:24:19.838 }, 00:24:19.838 "vs": { 00:24:19.838 "nvme_version": "1.4" 00:24:19.838 }, 00:24:19.838 "ns_data": { 00:24:19.838 "id": 1, 00:24:19.838 "can_share": false 00:24:19.838 } 00:24:19.838 } 00:24:19.838 ], 00:24:19.838 "mp_policy": "active_passive" 00:24:19.838 } 00:24:19.838 } 00:24:19.838 ]' 00:24:19.838 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:19.838 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:19.838 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:19.839 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:19.839 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:19.839 03:34:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:24:19.839 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=567a822b-1ff2-41b5-b37f-d1d69a7de89b 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:20.098 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 567a822b-1ff2-41b5-b37f-d1d69a7de89b 00:24:20.357 03:34:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:20.615 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5 00:24:20.615 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:20.874 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:21.133 { 00:24:21.133 "name": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:21.133 "aliases": [ 00:24:21.133 "lvs/nvme0n1p0" 00:24:21.133 ], 00:24:21.133 "product_name": "Logical Volume", 00:24:21.133 "block_size": 4096, 00:24:21.133 "num_blocks": 26476544, 00:24:21.133 "uuid": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:21.133 "assigned_rate_limits": { 00:24:21.133 "rw_ios_per_sec": 0, 00:24:21.133 "rw_mbytes_per_sec": 0, 00:24:21.133 "r_mbytes_per_sec": 0, 00:24:21.133 "w_mbytes_per_sec": 0 00:24:21.133 }, 00:24:21.133 "claimed": false, 00:24:21.133 "zoned": false, 00:24:21.133 "supported_io_types": { 00:24:21.133 "read": true, 00:24:21.133 "write": true, 00:24:21.133 "unmap": true, 00:24:21.133 "flush": false, 00:24:21.133 "reset": true, 00:24:21.133 "nvme_admin": false, 00:24:21.133 "nvme_io": false, 00:24:21.133 "nvme_io_md": false, 00:24:21.133 "write_zeroes": true, 00:24:21.133 "zcopy": false, 00:24:21.133 "get_zone_info": false, 00:24:21.133 "zone_management": false, 00:24:21.133 "zone_append": false, 00:24:21.133 "compare": false, 00:24:21.133 "compare_and_write": false, 00:24:21.133 "abort": false, 00:24:21.133 "seek_hole": true, 00:24:21.133 "seek_data": true, 00:24:21.133 "copy": false, 00:24:21.133 "nvme_iov_md": false 00:24:21.133 }, 00:24:21.133 "driver_specific": { 00:24:21.133 "lvol": { 00:24:21.133 "lvol_store_uuid": "7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5", 00:24:21.133 "base_bdev": "nvme0n1", 00:24:21.133 "thin_provision": true, 00:24:21.133 "num_allocated_clusters": 0, 00:24:21.133 "snapshot": false, 00:24:21.133 "clone": false, 00:24:21.133 "esnap_clone": false 00:24:21.133 } 00:24:21.133 } 00:24:21.133 } 00:24:21.133 ]' 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:21.133 03:34:44 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:21.393 03:34:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:21.678 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:21.678 { 00:24:21.678 "name": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:21.678 "aliases": [ 00:24:21.678 "lvs/nvme0n1p0" 00:24:21.678 ], 00:24:21.678 "product_name": "Logical Volume", 00:24:21.678 "block_size": 4096, 00:24:21.678 "num_blocks": 26476544, 00:24:21.678 "uuid": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:21.678 "assigned_rate_limits": { 00:24:21.678 "rw_ios_per_sec": 0, 00:24:21.678 "rw_mbytes_per_sec": 0, 00:24:21.678 "r_mbytes_per_sec": 0, 00:24:21.678 "w_mbytes_per_sec": 0 00:24:21.678 }, 00:24:21.678 "claimed": false, 00:24:21.678 "zoned": false, 00:24:21.678 "supported_io_types": { 00:24:21.678 "read": true, 00:24:21.678 "write": true, 00:24:21.678 "unmap": true, 00:24:21.678 "flush": false, 00:24:21.678 "reset": true, 00:24:21.678 "nvme_admin": false, 00:24:21.678 "nvme_io": false, 00:24:21.678 "nvme_io_md": false, 00:24:21.678 "write_zeroes": true, 00:24:21.678 "zcopy": false, 00:24:21.678 "get_zone_info": false, 00:24:21.678 "zone_management": false, 00:24:21.678 "zone_append": false, 00:24:21.678 "compare": false, 00:24:21.678 "compare_and_write": false, 00:24:21.678 "abort": false, 00:24:21.678 "seek_hole": true, 00:24:21.678 "seek_data": true, 00:24:21.678 "copy": false, 00:24:21.678 "nvme_iov_md": false 00:24:21.678 }, 00:24:21.678 "driver_specific": { 00:24:21.678 "lvol": { 00:24:21.678 "lvol_store_uuid": "7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5", 00:24:21.678 "base_bdev": "nvme0n1", 00:24:21.678 "thin_provision": true, 00:24:21.678 "num_allocated_clusters": 0, 00:24:21.678 "snapshot": false, 00:24:21.678 "clone": false, 00:24:21.678 "esnap_clone": false 00:24:21.678 } 00:24:21.678 } 00:24:21.678 } 00:24:21.678 ]' 00:24:21.678 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:21.678 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:21.678 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:21.937 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 00:24:22.195 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:22.195 { 00:24:22.195 "name": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:22.195 "aliases": [ 00:24:22.195 "lvs/nvme0n1p0" 00:24:22.195 ], 00:24:22.195 "product_name": "Logical Volume", 00:24:22.195 "block_size": 4096, 00:24:22.195 "num_blocks": 26476544, 00:24:22.195 "uuid": "7258cc04-48a0-4e9d-ab23-cb1ee66b1979", 00:24:22.195 "assigned_rate_limits": { 00:24:22.195 "rw_ios_per_sec": 0, 00:24:22.195 "rw_mbytes_per_sec": 0, 00:24:22.195 "r_mbytes_per_sec": 0, 00:24:22.195 "w_mbytes_per_sec": 0 00:24:22.195 }, 00:24:22.195 "claimed": false, 00:24:22.195 "zoned": false, 00:24:22.195 "supported_io_types": { 00:24:22.195 "read": true, 00:24:22.195 "write": true, 00:24:22.195 "unmap": true, 00:24:22.195 "flush": false, 00:24:22.195 "reset": true, 00:24:22.195 "nvme_admin": false, 00:24:22.195 "nvme_io": false, 00:24:22.195 "nvme_io_md": false, 00:24:22.195 "write_zeroes": true, 00:24:22.195 "zcopy": false, 00:24:22.195 "get_zone_info": false, 00:24:22.195 "zone_management": false, 00:24:22.195 "zone_append": false, 00:24:22.195 "compare": false, 00:24:22.195 "compare_and_write": false, 00:24:22.195 "abort": false, 00:24:22.195 "seek_hole": true, 00:24:22.195 "seek_data": true, 00:24:22.195 "copy": false, 00:24:22.195 "nvme_iov_md": false 00:24:22.196 }, 00:24:22.196 "driver_specific": { 00:24:22.196 "lvol": { 00:24:22.196 "lvol_store_uuid": "7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5", 00:24:22.196 "base_bdev": "nvme0n1", 00:24:22.196 "thin_provision": true, 00:24:22.196 "num_allocated_clusters": 0, 00:24:22.196 "snapshot": false, 00:24:22.196 "clone": false, 00:24:22.196 "esnap_clone": false 00:24:22.196 } 00:24:22.196 } 00:24:22.196 } 00:24:22.196 ]' 00:24:22.196 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:22.196 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 
--l2p_dram_limit 10' 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:22.455 03:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7258cc04-48a0-4e9d-ab23-cb1ee66b1979 --l2p_dram_limit 10 -c nvc0n1p0 00:24:22.714 [2024-11-05 03:34:46.154750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.714 [2024-11-05 03:34:46.154953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.714 [2024-11-05 03:34:46.154983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:22.714 [2024-11-05 03:34:46.154996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.155100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.155113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.715 [2024-11-05 03:34:46.155128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:22.715 [2024-11-05 03:34:46.155139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.155171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.715 [2024-11-05 03:34:46.156243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.715 [2024-11-05 03:34:46.156271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.156282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.715 [2024-11-05 03:34:46.156306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:24:22.715 [2024-11-05 03:34:46.156316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.156401] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8afb070d-d72c-4e93-9c89-06087d4a9cf7 00:24:22.715 [2024-11-05 03:34:46.157904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.157937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:22.715 [2024-11-05 03:34:46.157950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:22.715 [2024-11-05 03:34:46.157963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.165694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.165829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.715 [2024-11-05 03:34:46.165915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.700 ms 00:24:22.715 [2024-11-05 03:34:46.165955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.166086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.166126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.715 [2024-11-05 03:34:46.166218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:22.715 [2024-11-05 03:34:46.166242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.166356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.166374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.715 [2024-11-05 03:34:46.166385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.715 [2024-11-05 03:34:46.166401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.166428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.715 [2024-11-05 03:34:46.171536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.171570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.715 [2024-11-05 03:34:46.171587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:24:22.715 [2024-11-05 03:34:46.171598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.171633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.171644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.715 [2024-11-05 03:34:46.171657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:22.715 [2024-11-05 03:34:46.171668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.171705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:22.715 [2024-11-05 03:34:46.171833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.715 [2024-11-05 03:34:46.171853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.715 [2024-11-05 03:34:46.171867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.715 [2024-11-05 03:34:46.171883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.715 [2024-11-05 03:34:46.171895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.715 [2024-11-05 03:34:46.171909] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.715 [2024-11-05 03:34:46.171919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.715 [2024-11-05 03:34:46.171935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.715 [2024-11-05 03:34:46.171945] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.715 [2024-11-05 03:34:46.171959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.171969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.715 [2024-11-05 03:34:46.171982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:24:22.715 [2024-11-05 03:34:46.172002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.172079] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.715 [2024-11-05 03:34:46.172090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.715 [2024-11-05 03:34:46.172103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:22.715 [2024-11-05 03:34:46.172113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.715 [2024-11-05 03:34:46.172206] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.715 [2024-11-05 03:34:46.172218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.715 [2024-11-05 03:34:46.172231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.715 [2024-11-05 03:34:46.172264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.715 [2024-11-05 03:34:46.172315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.715 [2024-11-05 03:34:46.172348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.715 [2024-11-05 03:34:46.172359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.715 [2024-11-05 03:34:46.172371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.715 [2024-11-05 03:34:46.172380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.715 [2024-11-05 03:34:46.172394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.715 [2024-11-05 03:34:46.172409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.715 [2024-11-05 03:34:46.172433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.715 [2024-11-05 03:34:46.172468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.715 [2024-11-05 03:34:46.172499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.715 [2024-11-05 03:34:46.172531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172553] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.715 [2024-11-05 03:34:46.172562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.715 [2024-11-05 03:34:46.172582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.715 [2024-11-05 03:34:46.172596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.715 [2024-11-05 03:34:46.172617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.715 [2024-11-05 03:34:46.172626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.715 [2024-11-05 03:34:46.172638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.715 [2024-11-05 03:34:46.172647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:22.715 [2024-11-05 03:34:46.172659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:22.715 [2024-11-05 03:34:46.172668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:22.715 [2024-11-05 03:34:46.172688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:22.715 [2024-11-05 03:34:46.172700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.715 [2024-11-05 03:34:46.172708] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.715 [2024-11-05 03:34:46.172721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.716 [2024-11-05 03:34:46.172731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.716 [2024-11-05 03:34:46.172745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.716 [2024-11-05 03:34:46.172755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.716 [2024-11-05 03:34:46.172770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.716 [2024-11-05 03:34:46.172779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.716 [2024-11-05 03:34:46.172791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.716 [2024-11-05 03:34:46.172801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.716 [2024-11-05 03:34:46.172812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:22.716 [2024-11-05 03:34:46.172826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.716 [2024-11-05 03:34:46.172842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.172857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.716 [2024-11-05 03:34:46.172870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:22.716 [2024-11-05 03:34:46.172881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:22.716 [2024-11-05 03:34:46.172894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:22.716 [2024-11-05 03:34:46.172904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:22.716 [2024-11-05 03:34:46.172917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:22.716 [2024-11-05 03:34:46.172928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:22.716 [2024-11-05 03:34:46.172940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:22.716 [2024-11-05 03:34:46.172951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:22.716 [2024-11-05 03:34:46.172966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.172976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.172989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.172999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.173012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:22.716 [2024-11-05 03:34:46.173023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:22.716 [2024-11-05 03:34:46.173037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.173049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.716 [2024-11-05 03:34:46.173062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.716 [2024-11-05 03:34:46.173072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.716 [2024-11-05 03:34:46.173085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.716 [2024-11-05 03:34:46.173096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.716 [2024-11-05 03:34:46.173109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.716 [2024-11-05 03:34:46.173120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:24:22.716 [2024-11-05 03:34:46.173132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.716 [2024-11-05 03:34:46.173174] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:22.716 [2024-11-05 03:34:46.173192] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:26.910 [2024-11-05 03:34:49.778860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.778930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:26.910 [2024-11-05 03:34:49.778948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3611.540 ms 00:24:26.910 [2024-11-05 03:34:49.778962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.818234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.818301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.910 [2024-11-05 03:34:49.818319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.022 ms 00:24:26.910 [2024-11-05 03:34:49.818333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.818477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.818494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.910 [2024-11-05 03:34:49.818506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:26.910 [2024-11-05 03:34:49.818522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.864761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.864828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.910 [2024-11-05 03:34:49.864843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.268 ms 00:24:26.910 [2024-11-05 03:34:49.864856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.864894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.864912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.910 [2024-11-05 03:34:49.864923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:26.910 [2024-11-05 03:34:49.864935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.865460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.865481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.910 [2024-11-05 03:34:49.865493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:24:26.910 [2024-11-05 03:34:49.865506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.865606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.865621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.910 [2024-11-05 03:34:49.865634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:26.910 [2024-11-05 03:34:49.865649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.887438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.887483] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.910 [2024-11-05 03:34:49.887498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.804 ms 00:24:26.910 [2024-11-05 03:34:49.887511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:49.901200] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:26.910 [2024-11-05 03:34:49.904527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:49.904556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.910 [2024-11-05 03:34:49.904573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:24:26.910 [2024-11-05 03:34:49.904584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.006229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.006280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:26.910 [2024-11-05 03:34:50.006316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.771 ms 00:24:26.910 [2024-11-05 03:34:50.006328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.006537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.006554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.910 [2024-11-05 03:34:50.006571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:26.910 [2024-11-05 03:34:50.006582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.043587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.043627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:26.910 [2024-11-05 03:34:50.043644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.008 ms 00:24:26.910 [2024-11-05 03:34:50.043655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.078428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.078476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:26.910 [2024-11-05 03:34:50.078495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.780 ms 00:24:26.910 [2024-11-05 03:34:50.078506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.079208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.079229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.910 [2024-11-05 03:34:50.079243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:24:26.910 [2024-11-05 03:34:50.079253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.180913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.180957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:26.910 [2024-11-05 03:34:50.180979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.744 ms 00:24:26.910 [2024-11-05 03:34:50.180990] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.219242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.219294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:26.910 [2024-11-05 03:34:50.219313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.226 ms 00:24:26.910 [2024-11-05 03:34:50.219323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.910 [2024-11-05 03:34:50.256598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.910 [2024-11-05 03:34:50.256748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:26.910 [2024-11-05 03:34:50.256775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.285 ms 00:24:26.911 [2024-11-05 03:34:50.256786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.911 [2024-11-05 03:34:50.294059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.911 [2024-11-05 03:34:50.294097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.911 [2024-11-05 03:34:50.294115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.222 ms 00:24:26.911 [2024-11-05 03:34:50.294125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.911 [2024-11-05 03:34:50.294173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.911 [2024-11-05 03:34:50.294185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.911 [2024-11-05 03:34:50.294202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.911 [2024-11-05 03:34:50.294212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.911 [2024-11-05 03:34:50.294345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.911 [2024-11-05 03:34:50.294358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.911 [2024-11-05 03:34:50.294375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:26.911 [2024-11-05 03:34:50.294386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.911 [2024-11-05 03:34:50.295439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4146.927 ms, result 0 00:24:26.911 { 00:24:26.911 "name": "ftl0", 00:24:26.911 "uuid": "8afb070d-d72c-4e93-9c89-06087d4a9cf7" 00:24:26.911 } 00:24:26.911 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:26.911 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:27.170 /dev/nbd0 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:24:27.170 03:34:50 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:27.428 1+0 records in 00:24:27.428 1+0 records out 00:24:27.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359051 s, 11.4 MB/s 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:27.428 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:24:27.429 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:27.429 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:27.429 03:34:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:24:27.429 03:34:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:27.429 [2024-11-05 03:34:50.870047] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:24:27.429 [2024-11-05 03:34:50.870164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:24:27.687 [2024-11-05 03:34:51.049584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.688 [2024-11-05 03:34:51.164902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.071  [2024-11-05T03:34:53.592Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-05T03:34:54.528Z] Copying: 403/1024 [MB] (202 MBps) [2024-11-05T03:34:55.904Z] Copying: 606/1024 [MB] (202 MBps) [2024-11-05T03:34:56.840Z] Copying: 802/1024 [MB] (196 MBps) [2024-11-05T03:34:56.840Z] Copying: 995/1024 [MB] (192 MBps) [2024-11-05T03:34:57.778Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:24:34.194 00:24:34.194 03:34:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:36.149 03:34:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:36.149 [2024-11-05 03:34:59.571071] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:24:36.149 [2024-11-05 03:34:59.571192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78798 ] 00:24:36.409 [2024-11-05 03:34:59.754322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.409 [2024-11-05 03:34:59.871259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.789  [2024-11-05T03:35:02.311Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-05T03:35:03.248Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-05T03:35:04.187Z] Copying: 52/1024 [MB] (16 MBps) [2024-11-05T03:35:05.568Z] Copying: 70/1024 [MB] (18 MBps) [2024-11-05T03:35:06.506Z] Copying: 88/1024 [MB] (18 MBps) [2024-11-05T03:35:07.445Z] Copying: 106/1024 [MB] (18 MBps) [2024-11-05T03:35:08.384Z] Copying: 124/1024 [MB] (17 MBps) [2024-11-05T03:35:09.322Z] Copying: 142/1024 [MB] (18 MBps) [2024-11-05T03:35:10.258Z] Copying: 160/1024 [MB] (18 MBps) [2024-11-05T03:35:11.195Z] Copying: 178/1024 [MB] (18 MBps) [2024-11-05T03:35:12.576Z] Copying: 196/1024 [MB] (18 MBps) [2024-11-05T03:35:13.514Z] Copying: 214/1024 [MB] (17 MBps) [2024-11-05T03:35:14.463Z] Copying: 232/1024 [MB] (18 MBps) [2024-11-05T03:35:15.399Z] Copying: 251/1024 [MB] (18 MBps) [2024-11-05T03:35:16.335Z] Copying: 269/1024 [MB] (18 MBps) [2024-11-05T03:35:17.272Z] Copying: 287/1024 [MB] (18 MBps) [2024-11-05T03:35:18.208Z] Copying: 305/1024 [MB] (17 MBps) [2024-11-05T03:35:19.585Z] Copying: 323/1024 [MB] (17 MBps) [2024-11-05T03:35:20.522Z] Copying: 341/1024 [MB] (17 MBps) [2024-11-05T03:35:21.459Z] Copying: 359/1024 [MB] (17 MBps) [2024-11-05T03:35:22.395Z] Copying: 377/1024 [MB] (17 MBps) [2024-11-05T03:35:23.332Z] Copying: 395/1024 [MB] (18 MBps) [2024-11-05T03:35:24.269Z] Copying: 413/1024 [MB] (17 MBps) [2024-11-05T03:35:25.205Z] Copying: 431/1024 [MB] (18 MBps) [2024-11-05T03:35:26.582Z] Copying: 449/1024 [MB] (18 MBps) [2024-11-05T03:35:27.150Z] Copying: 468/1024 [MB] (18 MBps) [2024-11-05T03:35:28.528Z] Copying: 487/1024 [MB] (18 MBps) [2024-11-05T03:35:29.463Z] Copying: 505/1024 [MB] (18 MBps) [2024-11-05T03:35:30.399Z] Copying: 524/1024 [MB] (18 MBps) [2024-11-05T03:35:31.336Z] Copying: 542/1024 [MB] (18 MBps) [2024-11-05T03:35:32.273Z] Copying: 561/1024 [MB] (18 MBps) [2024-11-05T03:35:33.210Z] Copying: 579/1024 [MB] (18 MBps) [2024-11-05T03:35:34.147Z] Copying: 598/1024 [MB] (18 MBps) [2024-11-05T03:35:35.524Z] Copying: 616/1024 [MB] (18 MBps) [2024-11-05T03:35:36.465Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-05T03:35:37.402Z] Copying: 653/1024 [MB] (18 MBps) [2024-11-05T03:35:38.339Z] Copying: 671/1024 [MB] (17 MBps) [2024-11-05T03:35:39.275Z] Copying: 688/1024 [MB] (17 MBps) [2024-11-05T03:35:40.210Z] Copying: 706/1024 [MB] (17 MBps) [2024-11-05T03:35:41.147Z] Copying: 724/1024 [MB] (17 MBps) [2024-11-05T03:35:42.540Z] Copying: 742/1024 [MB] (17 MBps) [2024-11-05T03:35:43.132Z] Copying: 760/1024 [MB] (17 MBps) [2024-11-05T03:35:44.511Z] Copying: 778/1024 [MB] (17 MBps) [2024-11-05T03:35:45.448Z] Copying: 795/1024 [MB] (17 MBps) [2024-11-05T03:35:46.385Z] Copying: 814/1024 [MB] (18 MBps) [2024-11-05T03:35:47.323Z] Copying: 832/1024 [MB] (18 MBps) [2024-11-05T03:35:48.260Z] Copying: 851/1024 [MB] (18 MBps) [2024-11-05T03:35:49.197Z] Copying: 869/1024 [MB] (18 MBps) [2024-11-05T03:35:50.139Z] Copying: 887/1024 [MB] (18 MBps) [2024-11-05T03:35:51.541Z] Copying: 905/1024 [MB] (17 MBps) 
[2024-11-05T03:35:52.108Z] Copying: 923/1024 [MB] (18 MBps) [2024-11-05T03:35:53.486Z] Copying: 941/1024 [MB] (17 MBps) [2024-11-05T03:35:54.424Z] Copying: 958/1024 [MB] (17 MBps) [2024-11-05T03:35:55.362Z] Copying: 975/1024 [MB] (17 MBps) [2024-11-05T03:35:56.298Z] Copying: 993/1024 [MB] (17 MBps) [2024-11-05T03:35:56.866Z] Copying: 1010/1024 [MB] (17 MBps) [2024-11-05T03:35:58.249Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:25:34.665 00:25:34.665 03:35:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:34.665 03:35:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:34.665 03:35:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:34.925 [2024-11-05 03:35:58.394567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.394836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.925 [2024-11-05 03:35:58.394864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.925 [2024-11-05 03:35:58.394878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.394926] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.925 [2024-11-05 03:35:58.399148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.399189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.925 [2024-11-05 03:35:58.399205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:25:34.925 [2024-11-05 03:35:58.399216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.401052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.401093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.925 [2024-11-05 03:35:58.401110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:25:34.925 [2024-11-05 03:35:58.401122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.414173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.414232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.925 [2024-11-05 03:35:58.414250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:25:34.925 [2024-11-05 03:35:58.414260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.419386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.419419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.925 [2024-11-05 03:35:58.419434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.079 ms 00:25:34.925 [2024-11-05 03:35:58.419445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.456725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.456761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.925 [2024-11-05 03:35:58.456779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.261 ms 00:25:34.925 [2024-11-05 03:35:58.456788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.478396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.478541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.925 [2024-11-05 03:35:58.478584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.594 ms 00:25:34.925 [2024-11-05 03:35:58.478599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.925 [2024-11-05 03:35:58.478757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.925 [2024-11-05 03:35:58.478771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.925 [2024-11-05 03:35:58.478785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:34.925 [2024-11-05 03:35:58.478795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.185 [2024-11-05 03:35:58.517165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.185 [2024-11-05 03:35:58.517241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.185 [2024-11-05 03:35:58.517261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.406 ms 00:25:35.185 [2024-11-05 03:35:58.517271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.185 [2024-11-05 03:35:58.553964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.185 [2024-11-05 03:35:58.554150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.185 [2024-11-05 03:35:58.554179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.671 ms 00:25:35.185 [2024-11-05 03:35:58.554189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.185 [2024-11-05 03:35:58.590254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.185 [2024-11-05 03:35:58.590297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.185 [2024-11-05 03:35:58.590330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.070 ms 00:25:35.185 [2024-11-05 03:35:58.590340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.185 [2024-11-05 03:35:58.625565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.185 [2024-11-05 03:35:58.625606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.185 [2024-11-05 03:35:58.625623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.177 ms 00:25:35.185 [2024-11-05 03:35:58.625633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.185 [2024-11-05 03:35:58.625679] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.185 [2024-11-05 03:35:58.625697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 
[2024-11-05 03:35:58.625750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.625998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.626009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.626022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.626034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.626047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.185 [2024-11-05 03:35:58.626058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free
00:25:35.185 [2024-11-05 03:35:58.626073 ... 03:35:58.626975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30 ... Band 100: 0 / 261120 wr_cnt: 0 state: free (71 identical per-band lines condensed)
00:25:35.186 [2024-11-05 03:35:58.626994] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:35.186 [2024-11-05 03:35:58.627007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8afb070d-d72c-4e93-9c89-06087d4a9cf7
00:25:35.186 [2024-11-05 03:35:58.627018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:35.186 [2024-11-05 03:35:58.627033] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
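The next two lines of this dump report user writes: 0 and WAF: inf. WAF (write amplification factor) here is total media writes divided by user writes, and both statistics dumps in this log bear that out: 960 total writes with no user writes yet gives inf, while the clean-shutdown dump near the end of this run shows 107456 / 106496 = 1.0090. A minimal C restatement of that arithmetic (the numbers are read straight from this log; the helper is illustrative, not an SPDK API):

    #include <math.h>
    #include <stdio.h>

    /* WAF as printed by ftl_dev_dump_stats: total writes / user writes. */
    static double waf(double total_writes, double user_writes)
    {
        return user_writes == 0.0 ? INFINITY : total_writes / user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0));           /* inf: only internal (metadata) writes so far */
        printf("WAF: %.4f\n", waf(107456, 106496)); /* 1.0090, matching the clean-shutdown dump */
        return 0;
    }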
00:25:35.186 [2024-11-05 03:35:58.627043] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.186 [2024-11-05 03:35:58.627060] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.186 [2024-11-05 03:35:58.627070] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.186 [2024-11-05 03:35:58.627083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.186 [2024-11-05 03:35:58.627093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.186 [2024-11-05 03:35:58.627105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.186 [2024-11-05 03:35:58.627115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.186 [2024-11-05 03:35:58.627127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.186 [2024-11-05 03:35:58.627138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.186 [2024-11-05 03:35:58.627151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:25:35.186 [2024-11-05 03:35:58.627162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.186 [2024-11-05 03:35:58.647548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.186 [2024-11-05 03:35:58.647688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.186 [2024-11-05 03:35:58.647712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.360 ms 00:25:35.186 [2024-11-05 03:35:58.647723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.186 [2024-11-05 03:35:58.648283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.186 [2024-11-05 03:35:58.648310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.186 [2024-11-05 03:35:58.648324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:25:35.186 [2024-11-05 03:35:58.648334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.186 [2024-11-05 03:35:58.714305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.186 [2024-11-05 03:35:58.714446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.186 [2024-11-05 03:35:58.714471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.186 [2024-11-05 03:35:58.714482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.186 [2024-11-05 03:35:58.714547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.186 [2024-11-05 03:35:58.714558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.186 [2024-11-05 03:35:58.714571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.186 [2024-11-05 03:35:58.714582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.186 [2024-11-05 03:35:58.714697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.186 [2024-11-05 03:35:58.714711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.186 [2024-11-05 03:35:58.714727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.186 [2024-11-05 03:35:58.714737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.187 [2024-11-05 03:35:58.714764] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.187 [2024-11-05 03:35:58.714774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.187 [2024-11-05 03:35:58.714787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.187 [2024-11-05 03:35:58.714797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.838421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.838483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.446 [2024-11-05 03:35:58.838501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.838512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.937739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.937952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.446 [2024-11-05 03:35:58.937978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.937990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.446 [2024-11-05 03:35:58.938135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.446 [2024-11-05 03:35:58.938230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.446 [2024-11-05 03:35:58.938412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.446 [2024-11-05 03:35:58.938496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.446 [2024-11-05 03:35:58.938585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:35.446 [2024-11-05 03:35:58.938646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.446 [2024-11-05 03:35:58.938658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.446 [2024-11-05 03:35:58.938670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.446 [2024-11-05 03:35:58.938680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.446 [2024-11-05 03:35:58.938827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.108 ms, result 0 00:25:35.446 true 00:25:35.446 03:35:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78552 00:25:35.446 03:35:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78552 00:25:35.446 03:35:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:35.704 [2024-11-05 03:35:59.064338] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:25:35.704 [2024-11-05 03:35:59.064630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79400 ] 00:25:35.705 [2024-11-05 03:35:59.245729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.963 [2024-11-05 03:35:59.364718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.342  [2024-11-05T03:36:01.862Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-05T03:36:02.799Z] Copying: 396/1024 [MB] (197 MBps) [2024-11-05T03:36:03.737Z] Copying: 600/1024 [MB] (203 MBps) [2024-11-05T03:36:05.116Z] Copying: 801/1024 [MB] (200 MBps) [2024-11-05T03:36:05.116Z] Copying: 995/1024 [MB] (194 MBps) [2024-11-05T03:36:06.053Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:25:42.469 00:25:42.469 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78552 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:42.469 03:36:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.470 [2024-11-05 03:36:06.047933] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
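At this point the harness has SIGKILLed the spdk_tgt that owned the FTL device (the dirty shutdown under test) and uses spdk_dd to stage random data: 262144 blocks of 4096 bytes is exactly the 1024 MB that the progress ticks above count up to, at roughly 199 MBps. The second spdk_dd (dirty_shutdown.sh@88) then replays that file onto ftl0 at a 262144-block offset, which is the slower ~26 MBps copy further down. A small C check of the size and throughput arithmetic (elapsed time is eyeballed from the bracketed timestamps above, so approximate):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bs = 4096;      /* --bs=4096 from the spdk_dd command above */
        uint64_t count = 262144; /* --count=262144 */
        uint64_t bytes = bs * count;               /* 1 GiB staged in total */
        double mb = (double)bytes / (1024 * 1024); /* 1024 MB as the log counts it */
        /* ticks run from 03:36:01.862Z (199 MB) to 03:36:06.053Z (1024 MB) */
        double elapsed = 5.2;                      /* seconds, approximate */
        printf("%.0f MB staged, ~%.0f MBps average\n", mb, mb / elapsed);
        return 0;
    }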
00:25:42.470 [2024-11-05 03:36:06.048058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79477 ] 00:25:42.728 [2024-11-05 03:36:06.230756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.988 [2024-11-05 03:36:06.351208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.247 [2024-11-05 03:36:06.725002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.247 [2024-11-05 03:36:06.725075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.247 [2024-11-05 03:36:06.791158] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:43.247 [2024-11-05 03:36:06.791673] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:43.247 [2024-11-05 03:36:06.791839] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:43.506 [2024-11-05 03:36:07.085201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.506 [2024-11-05 03:36:07.085256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.507 [2024-11-05 03:36:07.085273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.507 [2024-11-05 03:36:07.085298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.507 [2024-11-05 03:36:07.085354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.507 [2024-11-05 03:36:07.085367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.507 [2024-11-05 03:36:07.085378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:43.507 [2024-11-05 03:36:07.085388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.507 [2024-11-05 03:36:07.085410] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.507 [2024-11-05 03:36:07.086473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.507 [2024-11-05 03:36:07.086631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.507 [2024-11-05 03:36:07.086647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.507 [2024-11-05 03:36:07.086659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:25:43.507 [2024-11-05 03:36:07.086669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.507 [2024-11-05 03:36:07.088152] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.767 [2024-11-05 03:36:07.106782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.767 [2024-11-05 03:36:07.106825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.767 [2024-11-05 03:36:07.106839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.662 ms 00:25:43.767 [2024-11-05 03:36:07.106849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.767 [2024-11-05 03:36:07.106909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.767 [2024-11-05 03:36:07.106921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:43.767 [2024-11-05 03:36:07.106932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:43.767 [2024-11-05 03:36:07.106942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.767 [2024-11-05 03:36:07.113594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.767 [2024-11-05 03:36:07.113740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.767 [2024-11-05 03:36:07.113761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.590 ms 00:25:43.767 [2024-11-05 03:36:07.113772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.767 [2024-11-05 03:36:07.113857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.767 [2024-11-05 03:36:07.113870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.767 [2024-11-05 03:36:07.113881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:43.767 [2024-11-05 03:36:07.113892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.767 [2024-11-05 03:36:07.113935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.767 [2024-11-05 03:36:07.113950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.768 [2024-11-05 03:36:07.113960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:43.768 [2024-11-05 03:36:07.113971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.768 [2024-11-05 03:36:07.113996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.768 [2024-11-05 03:36:07.118882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.768 [2024-11-05 03:36:07.118913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.768 [2024-11-05 03:36:07.118925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.900 ms 00:25:43.768 [2024-11-05 03:36:07.118936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.768 [2024-11-05 03:36:07.118967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.768 [2024-11-05 03:36:07.118978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.768 [2024-11-05 03:36:07.118989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:43.768 [2024-11-05 03:36:07.118999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.768 [2024-11-05 03:36:07.119053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.768 [2024-11-05 03:36:07.119081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.768 [2024-11-05 03:36:07.119116] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.768 [2024-11-05 03:36:07.119134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.768 [2024-11-05 03:36:07.119223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.768 [2024-11-05 03:36:07.119236] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.768 
[2024-11-05 03:36:07.119250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.768 [2024-11-05 03:36:07.119263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119279] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119308] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.768 [2024-11-05 03:36:07.119319] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.768 [2024-11-05 03:36:07.119329] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.768 [2024-11-05 03:36:07.119340] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.768 [2024-11-05 03:36:07.119351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.768 [2024-11-05 03:36:07.119361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.768 [2024-11-05 03:36:07.119372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:43.768 [2024-11-05 03:36:07.119382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.768 [2024-11-05 03:36:07.119454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.768 [2024-11-05 03:36:07.119469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.768 [2024-11-05 03:36:07.119479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:43.768 [2024-11-05 03:36:07.119489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.768 [2024-11-05 03:36:07.119583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.768 [2024-11-05 03:36:07.119598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.768 [2024-11-05 03:36:07.119609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.768 [2024-11-05 03:36:07.119640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.768 [2024-11-05 03:36:07.119669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.768 [2024-11-05 03:36:07.119688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.768 [2024-11-05 03:36:07.119707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.768 [2024-11-05 03:36:07.119716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.768 [2024-11-05 03:36:07.119726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.768 [2024-11-05 03:36:07.119735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.768 [2024-11-05 03:36:07.119744] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.768 [2024-11-05 03:36:07.119763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.768 [2024-11-05 03:36:07.119791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.768 [2024-11-05 03:36:07.119818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.768 [2024-11-05 03:36:07.119844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.768 [2024-11-05 03:36:07.119871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.768 [2024-11-05 03:36:07.119889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.768 [2024-11-05 03:36:07.119898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.768 [2024-11-05 03:36:07.119916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.768 [2024-11-05 03:36:07.119925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.768 [2024-11-05 03:36:07.119934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.768 [2024-11-05 03:36:07.119943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.768 [2024-11-05 03:36:07.119952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:43.768 [2024-11-05 03:36:07.119961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.768 [2024-11-05 03:36:07.119978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.768 [2024-11-05 03:36:07.119989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 03:36:07.119998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.768 [2024-11-05 03:36:07.120008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.768 [2024-11-05 03:36:07.120018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.768 [2024-11-05 03:36:07.120032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.768 [2024-11-05 
03:36:07.120042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.768 [2024-11-05 03:36:07.120052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.768 [2024-11-05 03:36:07.120061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.768 [2024-11-05 03:36:07.120071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.768 [2024-11-05 03:36:07.120080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.768 [2024-11-05 03:36:07.120089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.768 [2024-11-05 03:36:07.120100] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.768 [2024-11-05 03:36:07.120112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.768 [2024-11-05 03:36:07.120124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.768 [2024-11-05 03:36:07.120135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.768 [2024-11-05 03:36:07.120145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.768 [2024-11-05 03:36:07.120155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.768 [2024-11-05 03:36:07.120166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.768 [2024-11-05 03:36:07.120176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.768 [2024-11-05 03:36:07.120187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.768 [2024-11-05 03:36:07.120197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:43.768 [2024-11-05 03:36:07.120207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.768 [2024-11-05 03:36:07.120217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.768 [2024-11-05 03:36:07.120227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.768 [2024-11-05 03:36:07.120237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.768 [2024-11-05 03:36:07.120247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.769 [2024-11-05 03:36:07.120257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.769 [2024-11-05 03:36:07.120268] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:43.769 [2024-11-05 03:36:07.120279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.769 [2024-11-05 03:36:07.120302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.769 [2024-11-05 03:36:07.120313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.769 [2024-11-05 03:36:07.120323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.769 [2024-11-05 03:36:07.120336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.769 [2024-11-05 03:36:07.120347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.120358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.769 [2024-11-05 03:36:07.120368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:25:43.769 [2024-11-05 03:36:07.120378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.159845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.159895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.769 [2024-11-05 03:36:07.159911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.482 ms 00:25:43.769 [2024-11-05 03:36:07.159923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.160013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.160030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.769 [2024-11-05 03:36:07.160041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:43.769 [2024-11-05 03:36:07.160050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.221017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.221067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.769 [2024-11-05 03:36:07.221083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.993 ms 00:25:43.769 [2024-11-05 03:36:07.221097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.221154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.221165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.769 [2024-11-05 03:36:07.221177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:43.769 [2024-11-05 03:36:07.221187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.221698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.221714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.769 [2024-11-05 03:36:07.221726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:25:43.769 [2024-11-05 03:36:07.221737] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.221864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.221883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.769 [2024-11-05 03:36:07.221895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:43.769 [2024-11-05 03:36:07.221904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.242737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.242775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.769 [2024-11-05 03:36:07.242790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.844 ms 00:25:43.769 [2024-11-05 03:36:07.242801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.262786] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:43.769 [2024-11-05 03:36:07.262830] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.769 [2024-11-05 03:36:07.262846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.262858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.769 [2024-11-05 03:36:07.262871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.955 ms 00:25:43.769 [2024-11-05 03:36:07.262882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.292575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.292619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.769 [2024-11-05 03:36:07.292647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.692 ms 00:25:43.769 [2024-11-05 03:36:07.292658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.310317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.310353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.769 [2024-11-05 03:36:07.310367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.635 ms 00:25:43.769 [2024-11-05 03:36:07.310377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.327943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.327978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.769 [2024-11-05 03:36:07.327991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.548 ms 00:25:43.769 [2024-11-05 03:36:07.328001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.769 [2024-11-05 03:36:07.328805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.769 [2024-11-05 03:36:07.328835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.769 [2024-11-05 03:36:07.328848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:25:43.769 [2024-11-05 03:36:07.328858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
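Every management step in this startup sequence is reported through the same four trace_step notices from mngt/ftl_mngt.c (Action, name, duration, status); that is where per-step timings such as 60.993 ms for 'Initialize NV cache' above and 84.559 ms for 'Restore P2L checkpoints' just below come from. Purely as an illustration of that pattern (SPDK's real ftl_mngt pipeline is callback-driven; this is not its API), a self-contained C sketch of a timed step runner:

    #include <stdio.h>
    #include <time.h>

    /* Illustrative timed-step runner mimicking the trace_step output
     * format in this log; not SPDK's actual ftl_mngt implementation. */
    static int run_step(const char *name, int (*fn)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", name);
        printf("[FTL][ftl0] duration: %.3f ms\n", ms);
        printf("[FTL][ftl0] status: %d\n", status);
        return status;
    }

    static int dummy_step(void) { return 0; } /* stands in for a real step */

    int main(void)
    {
        return run_step("Initialize P2L checkpointing", dummy_step);
    }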
00:25:44.028 [2024-11-05 03:36:07.413303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.413367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:44.028 [2024-11-05 03:36:07.413384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.559 ms 00:25:44.028 [2024-11-05 03:36:07.413396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.424495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:44.028 [2024-11-05 03:36:07.427355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.427385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:44.028 [2024-11-05 03:36:07.427399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.928 ms 00:25:44.028 [2024-11-05 03:36:07.427410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.427508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.427521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:44.028 [2024-11-05 03:36:07.427533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:44.028 [2024-11-05 03:36:07.427543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.427616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.427629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:44.028 [2024-11-05 03:36:07.427641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:44.028 [2024-11-05 03:36:07.427651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.427672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.427687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:44.028 [2024-11-05 03:36:07.427697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:44.028 [2024-11-05 03:36:07.427708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.427742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:44.028 [2024-11-05 03:36:07.427754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.427765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:44.028 [2024-11-05 03:36:07.427775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:44.028 [2024-11-05 03:36:07.427796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.463589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 03:36:07.463738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:44.028 [2024-11-05 03:36:07.463838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.827 ms 00:25:44.028 [2024-11-05 03:36:07.463877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.028 [2024-11-05 03:36:07.463971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.028 [2024-11-05 
03:36:07.464073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
[2024-11-05 03:36:07.464110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
[2024-11-05 03:36:07.464141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-05 03:36:07.465338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.244 ms, result 0
00:25:44.964  [2024-11-05T03:36:09.485Z] Copying: 26/1024 [MB] (26 MBps) [... 36 intermediate per-second progress ticks at 25-28 MBps condensed ...] [2024-11-05T03:36:45.881Z] Copying: 1023/1024 [MB] (19 MBps) [2024-11-05T03:36:45.881Z] Copying: 1024/1024 [MB] (average 26 MBps)
[2024-11-05 03:36:45.881503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.297 [2024-11-05 03:36:45.881678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:22.556 [2024-11-05 03:36:45.881762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:26:22.556 [2024-11-05 03:36:45.881800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:22.556 [2024-11-05 03:36:45.882914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:22.557 [2024-11-05 03:36:45.890268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:26:22.557 [2024-11-05 03:36:45.890407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.557 [2024-11-05 03:36:45.890490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.195 ms 00:26:22.557 [2024-11-05 03:36:45.890528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:45.899502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:45.899641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.557 [2024-11-05 03:36:45.899726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:26:22.557 [2024-11-05 03:36:45.899763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:45.923341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:45.923486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.557 [2024-11-05 03:36:45.923589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.571 ms 00:26:22.557 [2024-11-05 03:36:45.923628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:45.928871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:45.928911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:22.557 [2024-11-05 03:36:45.928923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:26:22.557 [2024-11-05 03:36:45.928934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:45.964787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:45.964823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.557 [2024-11-05 03:36:45.964837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.868 ms 00:26:22.557 [2024-11-05 03:36:45.964847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:45.985702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:45.985739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.557 [2024-11-05 03:36:45.985761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.850 ms 00:26:22.557 [2024-11-05 03:36:45.985772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:46.095545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:46.095687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.557 [2024-11-05 03:36:46.095760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.911 ms 00:26:22.557 [2024-11-05 03:36:46.095802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.557 [2024-11-05 03:36:46.132433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.557 [2024-11-05 03:36:46.132562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:22.557 [2024-11-05 03:36:46.132653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.648 ms 00:26:22.557 [2024-11-05 03:36:46.132690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.817 [2024-11-05 
03:36:46.169006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.817 [2024-11-05 03:36:46.169141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:22.817 [2024-11-05 03:36:46.169265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.303 ms 00:26:22.817 [2024-11-05 03:36:46.169282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.817 [2024-11-05 03:36:46.204434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.817 [2024-11-05 03:36:46.204478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:22.817 [2024-11-05 03:36:46.204491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.151 ms 00:26:22.817 [2024-11-05 03:36:46.204502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.817 [2024-11-05 03:36:46.239802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.817 [2024-11-05 03:36:46.239935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:22.817 [2024-11-05 03:36:46.239955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.283 ms 00:26:22.817 [2024-11-05 03:36:46.239965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.817 [2024-11-05 03:36:46.240049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:22.817 [2024-11-05 03:36:46.240066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:26:22.817 [2024-11-05 03:36:46.240079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:22.817 [2024-11-05 03:36:46.240222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free
00:26:22.817 [2024-11-05 03:36:46.240233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16 ... Band 100: 0 / 261120 wr_cnt: 0 state: free (85 identical entries elided)
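For readers scanning the dump: SPDK's FTL divides the base device into fixed-size bands (261120 blocks each in this layout) and reports, per band, how many blocks hold valid data, the write count, and the band state. The statistics block that follows also reports the write amplification factor (WAF) as total media writes over user writes. A minimal sketch of that arithmetic, using the two counters printed just below (the program and its variable names are illustrative, not part of SPDK):

    #include <stdio.h>

    /* Recompute the WAF reported by ftl_dev_dump_stats below.
     * Counter values are copied from the log; 1.0090 is the same
     * rounding the log itself prints. */
    int main(void)
    {
        const double total_writes = 107456.0; /* media writes, in blocks */
        const double user_writes  = 106496.0; /* host writes, in blocks */
        printf("WAF = %.4f\n", total_writes / user_writes); /* -> 1.0090 */
        return 0;
    }

A WAF this close to 1.0 is expected here: the bands were written once, sequentially, so little beyond metadata traffic was added on top of the user writes.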
00:26:22.818 [2024-11-05 03:36:46.241177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:22.818 [2024-11-05 03:36:46.241187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8afb070d-d72c-4e93-9c89-06087d4a9cf7
00:26:22.818 [2024-11-05 03:36:46.241199] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496
00:26:22.818 [2024-11-05 03:36:46.241214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456
00:26:22.818 [2024-11-05 03:36:46.241235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496
00:26:22.818 [2024-11-05 03:36:46.241246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090
00:26:22.818 [2024-11-05 03:36:46.241256] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:22.818 [2024-11-05 03:36:46.241266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:22.818 [2024-11-05 03:36:46.241276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:22.818 [2024-11-05 03:36:46.241294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:22.818 [2024-11-05 03:36:46.241304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:22.818 [2024-11-05 03:36:46.241314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818 [2024-11-05 03:36:46.241324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:22.818 [2024-11-05 03:36:46.241335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms
00:26:22.818 [2024-11-05 03:36:46.241345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:22.818 [2024-11-05 03:36:46.260905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818 [2024-11-05 03:36:46.260939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:22.819 [2024-11-05 03:36:46.260958] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.557 ms 00:26:22.819 [2024-11-05 03:36:46.260970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.819 [2024-11-05 03:36:46.261519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.819 [2024-11-05 03:36:46.261536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:22.819 [2024-11-05 03:36:46.261548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:26:22.819 [2024-11-05 03:36:46.261567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.819 [2024-11-05 03:36:46.313312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.819 [2024-11-05 03:36:46.313348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:22.819 [2024-11-05 03:36:46.313363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.819 [2024-11-05 03:36:46.313374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.819 [2024-11-05 03:36:46.313439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.819 [2024-11-05 03:36:46.313451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:22.819 [2024-11-05 03:36:46.313462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.819 [2024-11-05 03:36:46.313472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.819 [2024-11-05 03:36:46.313564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.819 [2024-11-05 03:36:46.313579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:22.819 [2024-11-05 03:36:46.313589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.819 [2024-11-05 03:36:46.313604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.819 [2024-11-05 03:36:46.313630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.819 [2024-11-05 03:36:46.313642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:22.819 [2024-11-05 03:36:46.313653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.819 [2024-11-05 03:36:46.313663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.437610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.437665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.078 [2024-11-05 03:36:46.437682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.437693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.538584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.538636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.078 [2024-11-05 03:36:46.538652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.538664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.538776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.538789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:26:23.078 [2024-11-05 03:36:46.538800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.538811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.538864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.538878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.078 [2024-11-05 03:36:46.538888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.538898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.539012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.539030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.078 [2024-11-05 03:36:46.539041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.539052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.539088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.539100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:23.078 [2024-11-05 03:36:46.539110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.539121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.539158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.539174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.078 [2024-11-05 03:36:46.539184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.539194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.539234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.078 [2024-11-05 03:36:46.539247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.078 [2024-11-05 03:36:46.539258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.078 [2024-11-05 03:36:46.539268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.078 [2024-11-05 03:36:46.539411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 660.523 ms, result 0 00:26:24.984 00:26:24.984 00:26:24.984 03:36:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:26.888 03:36:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:26.888 [2024-11-05 03:36:50.184998] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
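Two things worth noting at this point in the trace. First, the 'FTL shutdown' process above tears the instance down by running the registered startup steps' rollback handlers in reverse of their registration order (Initialize reloc back through Open base bdev), which is why every Rollback entry reports duration: 0.000 ms for resources that are simply released. Second, the test script md5sums testfile2 and then re-launches spdk_dd to read 262144 blocks from ftl0 back into testfile. A small sketch of the expected transfer size, assuming a 4 KiB logical block size (an assumption, though the progress meter further down, 1048576 kB total, is consistent with it):

    #include <stdio.h>

    /* Expected size of the spdk_dd transfer above: --count=262144
     * blocks at an assumed 4 KiB per block. */
    int main(void)
    {
        const unsigned long long count = 262144; /* --count from the command line */
        const unsigned long long bs    = 4096;   /* assumed block size, bytes */
        const unsigned long long bytes = count * bs;
        printf("%llu bytes = %llu KiB = %llu MiB\n",
               bytes, bytes >> 10, bytes >> 20); /* 1073741824 = 1048576 KiB = 1024 MiB */
        return 0;
    }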
00:26:26.888 [2024-11-05 03:36:50.185121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79921 ] 00:26:26.888 [2024-11-05 03:36:50.367093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.147 [2024-11-05 03:36:50.482413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.407 [2024-11-05 03:36:50.837299] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.407 [2024-11-05 03:36:50.837369] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.667 [2024-11-05 03:36:50.998851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.667 [2024-11-05 03:36:50.999029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:27.667 [2024-11-05 03:36:50.999060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:27.667 [2024-11-05 03:36:50.999072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.667 [2024-11-05 03:36:50.999130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.667 [2024-11-05 03:36:50.999143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.667 [2024-11-05 03:36:50.999157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:27.667 [2024-11-05 03:36:50.999167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.667 [2024-11-05 03:36:50.999189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:27.667 [2024-11-05 03:36:51.000161] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:27.667 [2024-11-05 03:36:51.000187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.667 [2024-11-05 03:36:51.000198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.667 [2024-11-05 03:36:51.000209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:26:27.667 [2024-11-05 03:36:51.000219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.667 [2024-11-05 03:36:51.001659] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:27.667 [2024-11-05 03:36:51.020334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.020372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:27.668 [2024-11-05 03:36:51.020387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.706 ms 00:26:27.668 [2024-11-05 03:36:51.020397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.020465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.020477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:27.668 [2024-11-05 03:36:51.020489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:27.668 [2024-11-05 03:36:51.020499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.027208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:27.668 [2024-11-05 03:36:51.027240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.668 [2024-11-05 03:36:51.027252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.647 ms 00:26:27.668 [2024-11-05 03:36:51.027263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.027361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.027375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.668 [2024-11-05 03:36:51.027387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:27.668 [2024-11-05 03:36:51.027397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.027441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.027453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:27.668 [2024-11-05 03:36:51.027463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:27.668 [2024-11-05 03:36:51.027473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.027498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.668 [2024-11-05 03:36:51.032127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.032159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.668 [2024-11-05 03:36:51.032176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:26:27.668 [2024-11-05 03:36:51.032194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.032225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.032236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:27.668 [2024-11-05 03:36:51.032247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:27.668 [2024-11-05 03:36:51.032257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.032324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:27.668 [2024-11-05 03:36:51.032349] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:27.668 [2024-11-05 03:36:51.032403] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:27.668 [2024-11-05 03:36:51.032426] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:27.668 [2024-11-05 03:36:51.032515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:27.668 [2024-11-05 03:36:51.032529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:27.668 [2024-11-05 03:36:51.032541] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:27.668 [2024-11-05 03:36:51.032554] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:27.668 [2024-11-05 03:36:51.032566] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:27.668 [2024-11-05 03:36:51.032578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:27.668 [2024-11-05 03:36:51.032588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:27.668 [2024-11-05 03:36:51.032598] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:27.668 [2024-11-05 03:36:51.032607] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:27.668 [2024-11-05 03:36:51.032621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.032631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:27.668 [2024-11-05 03:36:51.032642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:26:27.668 [2024-11-05 03:36:51.032651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.032726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.668 [2024-11-05 03:36:51.032738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:27.668 [2024-11-05 03:36:51.032748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:27.668 [2024-11-05 03:36:51.032759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.668 [2024-11-05 03:36:51.032851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:27.668 [2024-11-05 03:36:51.032868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:27.668 [2024-11-05 03:36:51.032878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.668 [2024-11-05 03:36:51.032888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.032899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:27.668 [2024-11-05 03:36:51.032908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.032917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:27.668 [2024-11-05 03:36:51.032926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:27.668 [2024-11-05 03:36:51.032936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:27.668 [2024-11-05 03:36:51.032946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.668 [2024-11-05 03:36:51.032957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:27.668 [2024-11-05 03:36:51.032971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:27.668 [2024-11-05 03:36:51.032988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.668 [2024-11-05 03:36:51.032998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:27.668 [2024-11-05 03:36:51.033007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:27.668 [2024-11-05 03:36:51.033026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:27.668 [2024-11-05 03:36:51.033044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033054] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:27.668 [2024-11-05 03:36:51.033073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:27.668 [2024-11-05 03:36:51.033100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:27.668 [2024-11-05 03:36:51.033127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:27.668 [2024-11-05 03:36:51.033153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:27.668 [2024-11-05 03:36:51.033179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.668 [2024-11-05 03:36:51.033197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:27.668 [2024-11-05 03:36:51.033205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:27.668 [2024-11-05 03:36:51.033214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.668 [2024-11-05 03:36:51.033223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:27.668 [2024-11-05 03:36:51.033232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:27.668 [2024-11-05 03:36:51.033241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:27.668 [2024-11-05 03:36:51.033263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:27.668 [2024-11-05 03:36:51.033280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:27.668 [2024-11-05 03:36:51.033314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:27.668 [2024-11-05 03:36:51.033324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.668 [2024-11-05 03:36:51.033344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:27.668 [2024-11-05 03:36:51.033353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:27.668 [2024-11-05 03:36:51.033362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:27.668 
[2024-11-05 03:36:51.033371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:27.668 [2024-11-05 03:36:51.033380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:27.668 [2024-11-05 03:36:51.033389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:27.668 [2024-11-05 03:36:51.033400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:27.668 [2024-11-05 03:36:51.033412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.668 [2024-11-05 03:36:51.033424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:27.669 [2024-11-05 03:36:51.033435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:27.669 [2024-11-05 03:36:51.033445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:27.669 [2024-11-05 03:36:51.033455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:27.669 [2024-11-05 03:36:51.033465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:27.669 [2024-11-05 03:36:51.033475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:27.669 [2024-11-05 03:36:51.033486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:27.669 [2024-11-05 03:36:51.033495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:27.669 [2024-11-05 03:36:51.033505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:27.669 [2024-11-05 03:36:51.033515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:27.669 [2024-11-05 03:36:51.033566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:27.669 [2024-11-05 03:36:51.033581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:27.669 [2024-11-05 03:36:51.033602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:27.669 [2024-11-05 03:36:51.033613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:27.669 [2024-11-05 03:36:51.033624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:27.669 [2024-11-05 03:36:51.033635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.033646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:27.669 [2024-11-05 03:36:51.033656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:26:27.669 [2024-11-05 03:36:51.033666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.072615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.072652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:27.669 [2024-11-05 03:36:51.072667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.965 ms 00:26:27.669 [2024-11-05 03:36:51.072678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.072762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.072774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:27.669 [2024-11-05 03:36:51.072786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:27.669 [2024-11-05 03:36:51.072795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.126346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.126384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:27.669 [2024-11-05 03:36:51.126398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.578 ms 00:26:27.669 [2024-11-05 03:36:51.126408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.126448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.126459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:27.669 [2024-11-05 03:36:51.126470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:27.669 [2024-11-05 03:36:51.126484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.126971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.126985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:27.669 [2024-11-05 03:36:51.126997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:26:27.669 [2024-11-05 03:36:51.127006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.127121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.127135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:27.669 [2024-11-05 03:36:51.127145] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:27.669 [2024-11-05 03:36:51.127161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.146654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.146689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:27.669 [2024-11-05 03:36:51.146714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.504 ms 00:26:27.669 [2024-11-05 03:36:51.146725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.165736] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:27.669 [2024-11-05 03:36:51.165773] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:27.669 [2024-11-05 03:36:51.165790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.165800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:27.669 [2024-11-05 03:36:51.165812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.995 ms 00:26:27.669 [2024-11-05 03:36:51.165822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.195454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.195499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:27.669 [2024-11-05 03:36:51.195514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.639 ms 00:26:27.669 [2024-11-05 03:36:51.195525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.214026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.214075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:27.669 [2024-11-05 03:36:51.214088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.488 ms 00:26:27.669 [2024-11-05 03:36:51.214098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.232083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.232219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:27.669 [2024-11-05 03:36:51.232238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.977 ms 00:26:27.669 [2024-11-05 03:36:51.232248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.669 [2024-11-05 03:36:51.233035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.669 [2024-11-05 03:36:51.233068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:27.669 [2024-11-05 03:36:51.233087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:26:27.669 [2024-11-05 03:36:51.233101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.318045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.318282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:27.929 [2024-11-05 03:36:51.318323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.059 ms 00:26:27.929 [2024-11-05 03:36:51.318335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.329133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:27.929 [2024-11-05 03:36:51.331694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.331724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:27.929 [2024-11-05 03:36:51.331737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.288 ms 00:26:27.929 [2024-11-05 03:36:51.331748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.331830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.331844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:27.929 [2024-11-05 03:36:51.331855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:27.929 [2024-11-05 03:36:51.331868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.333497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.333672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:27.929 [2024-11-05 03:36:51.333747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:26:27.929 [2024-11-05 03:36:51.333783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.333832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.333864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:27.929 [2024-11-05 03:36:51.333894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:27.929 [2024-11-05 03:36:51.333923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.334003] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:27.929 [2024-11-05 03:36:51.334058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.334091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:27.929 [2024-11-05 03:36:51.334170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:27.929 [2024-11-05 03:36:51.334206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.372099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.372137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:27.929 [2024-11-05 03:36:51.372152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.892 ms 00:26:27.929 [2024-11-05 03:36:51.372168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.929 [2024-11-05 03:36:51.372239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.929 [2024-11-05 03:36:51.372252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:27.929 [2024-11-05 03:36:51.372264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:27.929 [2024-11-05 03:36:51.372274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
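The startup that closes out just below (finish_msg reports 'FTL startup', 374.625 ms) brought the dirty instance back: load and validate the superblock, rebuild the layout, restore NV cache state (4 full chunks, 0 empty), valid map, band info, trim and P2L metadata, then initialize the L2P. Two figures from the layout dump above can be cross-checked against each other: 20971520 L2P entries at 4 bytes per address is exactly the 80.00 MiB shown for 'Region l2p'. A sketch of that check (illustrative code, not SPDK's):

    #include <stdio.h>

    /* Size of the L2P (logical-to-physical) table region, from the
     * "L2P entries" and "L2P address size" lines in the layout dump. */
    int main(void)
    {
        const unsigned long long entries    = 20971520ULL; /* L2P entries */
        const unsigned long long entry_size = 4ULL;        /* bytes per address */
        const double mib = entries * entry_size / (1024.0 * 1024.0);
        printf("l2p region: %.2f MiB\n", mib); /* -> 80.00 */
        return 0;
    }

The SB metadata layout above is likewise self-consistent: each region's blk_offs equals the previous region's blk_offs plus blk_sz (0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, 0x5020 + 0x80 = 0x50a0, and so on up to the 0xfffffffe free region at 0x7220), so the regions tile the NV cache device without gaps. The copy pass below then reads the full 1024 MiB back at roughly 35 MBps per interval, about 30 seconds end to end (the meter's final line reports an average of 33 MBps).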
00:26:27.929 [2024-11-05 03:36:51.373360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.625 ms, result 0
00:26:29.308  [2024-11-05T03:36:53.834Z] Copying: 1212/1048576 [kB] (1212 kBps) [2024-11-05T03:36:54.790Z] Copying: 9272/1048576 [kB] (8060 kBps) ... [2024-11-05T03:37:22.203Z] Copying: 1007/1024 [MB] (35 MBps) [2024-11-05T03:37:23.631Z] Copying: 1024/1024 [MB] (average 33 MBps) (27 intermediate progress ticks at 33-36 MBps elided)
[2024-11-05 03:37:23.463973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:00.047 [2024-11-05 03:37:23.464045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:00.047 [2024-11-05 03:37:23.464089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:00.047 [2024-11-05 03:37:23.464107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.047 [2024-11-05 03:37:23.464144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:00.047 [2024-11-05 03:37:23.469871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:00.047 [2024-11-05 03:37:23.470232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:00.047 [2024-11-05 03:37:23.470258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.708 ms
00:27:00.047 [2024-11-05 03:37:23.470271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.047 [2024-11-05 03:37:23.470524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:00.047 [2024-11-05 03:37:23.470540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:00.047 [2024-11-05 03:37:23.470557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms
00:27:00.047 [2024-11-05 03:37:23.470569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:00.047 [2024-11-05 03:37:23.482436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.482494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.047 [2024-11-05 03:37:23.482512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.866 ms 00:27:00.047 [2024-11-05 03:37:23.482524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.487961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.487998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.047 [2024-11-05 03:37:23.488011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.409 ms 00:27:00.047 [2024-11-05 03:37:23.488028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.525385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.525536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.047 [2024-11-05 03:37:23.525558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.353 ms 00:27:00.047 [2024-11-05 03:37:23.525569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.547025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.547066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.047 [2024-11-05 03:37:23.547079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.451 ms 00:27:00.047 [2024-11-05 03:37:23.547090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.549256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.549411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.047 [2024-11-05 03:37:23.549433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:27:00.047 [2024-11-05 03:37:23.549444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.586361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.586412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:00.047 [2024-11-05 03:37:23.586427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.947 ms 00:27:00.047 [2024-11-05 03:37:23.586437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.047 [2024-11-05 03:37:23.622673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.047 [2024-11-05 03:37:23.622823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:00.047 [2024-11-05 03:37:23.622857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.256 ms 00:27:00.047 [2024-11-05 03:37:23.622867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.308 [2024-11-05 03:37:23.658480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.308 [2024-11-05 03:37:23.658517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.308 [2024-11-05 03:37:23.658530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.632 ms 00:27:00.308 [2024-11-05 
03:37:23.658540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.309 [2024-11-05 03:37:23.694788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:00.309 [2024-11-05 03:37:23.694845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:00.309 [2024-11-05 03:37:23.694858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.226 ms
00:27:00.309 [2024-11-05 03:37:23.694868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.309 [2024-11-05 03:37:23.694906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:00.309 [2024-11-05 03:37:23.694923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:00.309 [2024-11-05 03:37:23.694935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
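This final dump shows where the data landed: Band 1 is completely filled (261120/261120, closed), Band 2 holds the overflow (1536 blocks, still open), and every other band was never touched. A band's life cycle is visible in these states: it leaves free when the FTL opens it for writing, accumulates valid blocks, and is closed once fully written. A minimal sketch of that bookkeeping (names are illustrative, not SPDK's internal definitions):

    #include <stdio.h>

    /* Illustrative model of the per-band counters the dump prints:
     * "Band N: <valid> / <capacity> wr_cnt: <n> state: <s>". */
    enum band_state { BAND_FREE, BAND_OPEN, BAND_CLOSED };

    struct band {
        unsigned valid;    /* blocks holding valid data */
        unsigned capacity; /* 261120 blocks per band in this layout */
        unsigned wr_cnt;   /* times the band has been written */
        enum band_state state;
    };

    int main(void)
    {
        const struct band b1 = { 261120, 261120, 1, BAND_CLOSED };
        const struct band b2 = {   1536, 261120, 1, BAND_OPEN   };
        printf("resident blocks: %u\n", b1.valid + b2.valid); /* 262656 */
        return 0;
    }

The 262656 resident blocks slightly exceed the 262144 blocks spdk_dd wrote; the extra is presumably FTL bookkeeping rather than user data, consistent with the write amplification just above 1.0 reported in the earlier stats dump.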
00:27:00.309 [2024-11-05 03:37:23.694947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 ... Band 94: 0 / 261120 wr_cnt: 0 state: free (92 identical entries elided)
00:27:00.310 [2024-11-05 03:37:23.695974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.695984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.695994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.696005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.696015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.696025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.310 [2024-11-05 03:37:23.696043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.310 [2024-11-05 03:37:23.696053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8afb070d-d72c-4e93-9c89-06087d4a9cf7 00:27:00.310 [2024-11-05 03:37:23.696064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:00.310 [2024-11-05 03:37:23.696074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158144 00:27:00.310 [2024-11-05 03:37:23.696084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156160 00:27:00.310 [2024-11-05 03:37:23.696098] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:27:00.310 [2024-11-05 03:37:23.696108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.310 [2024-11-05 03:37:23.696118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.310 [2024-11-05 03:37:23.696128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.310 [2024-11-05 03:37:23.696148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.310 [2024-11-05 03:37:23.696157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.310 [2024-11-05 03:37:23.696166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.310 [2024-11-05 03:37:23.696177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.310 [2024-11-05 03:37:23.696187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:27:00.310 [2024-11-05 03:37:23.696198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.715618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.310 [2024-11-05 03:37:23.715658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:00.310 [2024-11-05 03:37:23.715671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.414 ms 00:27:00.310 [2024-11-05 03:37:23.715682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.716217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.310 [2024-11-05 03:37:23.716229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.310 [2024-11-05 03:37:23.716239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:27:00.310 [2024-11-05 03:37:23.716250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.768231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.310 [2024-11-05 03:37:23.768268] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.310 [2024-11-05 03:37:23.768282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.310 [2024-11-05 03:37:23.768304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.768357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.310 [2024-11-05 03:37:23.768369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.310 [2024-11-05 03:37:23.768380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.310 [2024-11-05 03:37:23.768390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.768467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.310 [2024-11-05 03:37:23.768486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.310 [2024-11-05 03:37:23.768496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.310 [2024-11-05 03:37:23.768507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.310 [2024-11-05 03:37:23.768523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.310 [2024-11-05 03:37:23.768534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.310 [2024-11-05 03:37:23.768545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.310 [2024-11-05 03:37:23.768554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.570 [2024-11-05 03:37:23.892859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.570 [2024-11-05 03:37:23.892920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:00.570 [2024-11-05 03:37:23.892935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.570 [2024-11-05 03:37:23.892946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.570 [2024-11-05 03:37:23.994947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.570 [2024-11-05 03:37:23.994994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:00.570 [2024-11-05 03:37:23.995008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.570 [2024-11-05 03:37:23.995019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.571 [2024-11-05 03:37:23.995139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.571 [2024-11-05 03:37:23.995215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.571 [2024-11-05 03:37:23.995395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:00.571 [2024-11-05 03:37:23.995471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.571 [2024-11-05 03:37:23.995539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.571 [2024-11-05 03:37:23.995610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.571 [2024-11-05 03:37:23.995620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.571 [2024-11-05 03:37:23.995630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.571 [2024-11-05 03:37:23.995747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.615 ms, result 0 00:27:01.509 00:27:01.509 00:27:01.509 03:37:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:03.417 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:03.417 03:37:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:03.417 [2024-11-05 03:37:26.893829] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
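Two figures above can be sanity-checked directly from the log. The shutdown stats report 158144 total writes against 156160 user writes, and the spdk_dd invocation re-opens ftl0 to read the second half of the test data back into testfile2. A minimal sketch of the arithmetic, assuming ftl0 exposes 4096-byte logical blocks and that --skip/--count are given in those blocks (consistent with the 1024 MB total reported by the copy progress further down):

# WAF in the stats dump is total writes / user writes; the ~2k writes
# beyond the user count would be FTL's own metadata traffic.
awk 'BEGIN { printf "WAF = 158144 / 156160 = %.4f\n", 158144 / 156160 }'   # -> 1.0127

# --skip=262144 --count=262144 at the assumed 4 KiB block size selects
# exactly the second 1 GiB of the device (the first GiB is testfile,
# just verified by md5sum -c above).
echo "$(( 262144 * 4096 / 1024 / 1024 )) MiB read back"                    # -> 1024 MiB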
00:27:03.417 [2024-11-05 03:37:26.893948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80295 ] 00:27:03.677 [2024-11-05 03:37:27.077089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.677 [2024-11-05 03:37:27.187102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.248 [2024-11-05 03:37:27.545413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.248 [2024-11-05 03:37:27.545482] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.248 [2024-11-05 03:37:27.707002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.707195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:04.248 [2024-11-05 03:37:27.707227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:04.248 [2024-11-05 03:37:27.707238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.707315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.707328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:04.248 [2024-11-05 03:37:27.707343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:04.248 [2024-11-05 03:37:27.707353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.707377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:04.248 [2024-11-05 03:37:27.708249] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:04.248 [2024-11-05 03:37:27.708270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.708280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:04.248 [2024-11-05 03:37:27.708305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:27:04.248 [2024-11-05 03:37:27.708315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.709737] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:04.248 [2024-11-05 03:37:27.728147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.728188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:04.248 [2024-11-05 03:37:27.728203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:27:04.248 [2024-11-05 03:37:27.728214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.728280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.728307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:04.248 [2024-11-05 03:37:27.728319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:04.248 [2024-11-05 03:37:27.728329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.735182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
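spdk_dd carries no bdev configuration of its own here; the --json=.../ftl.json argument above is what makes it recreate ftl0 before the copy can start (open the base bdev, attach nvc0n1p0 as the write-buffer cache, load the superblock, as traced in the surrounding entries). A hedged sketch of the equivalent RPC call, with flag names taken from SPDK's bdev_ftl_create helper; the test's actual ftl.json may instead carry a bdev_ftl_load entry for the existing instance, and <base_bdev> is a placeholder since the log never prints the base bdev's name:

# Hypothetical equivalent of the ftl.json bdev entry (cache bdev and device
# UUID copied from the log; <base_bdev> is a placeholder).
scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0 \
    -u 8afb070d-d72c-4e93-9c89-06087d4a9cf7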
00:27:04.248 [2024-11-05 03:37:27.735214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:04.248 [2024-11-05 03:37:27.735227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms 00:27:04.248 [2024-11-05 03:37:27.735238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.735334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.735349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:04.248 [2024-11-05 03:37:27.735360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:04.248 [2024-11-05 03:37:27.735370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.248 [2024-11-05 03:37:27.735410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.248 [2024-11-05 03:37:27.735422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:04.248 [2024-11-05 03:37:27.735432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:04.249 [2024-11-05 03:37:27.735442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.249 [2024-11-05 03:37:27.735466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:04.249 [2024-11-05 03:37:27.740583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.249 [2024-11-05 03:37:27.740721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:04.249 [2024-11-05 03:37:27.740850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:27:04.249 [2024-11-05 03:37:27.740894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.249 [2024-11-05 03:37:27.740949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.249 [2024-11-05 03:37:27.740982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:04.249 [2024-11-05 03:37:27.741013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:04.249 [2024-11-05 03:37:27.741099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.249 [2024-11-05 03:37:27.741183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:04.249 [2024-11-05 03:37:27.741231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:04.249 [2024-11-05 03:37:27.741383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:04.249 [2024-11-05 03:37:27.741464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:04.249 [2024-11-05 03:37:27.741595] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:04.249 [2024-11-05 03:37:27.741697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:04.249 [2024-11-05 03:37:27.741797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:04.249 [2024-11-05 03:37:27.741853] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:04.249 [2024-11-05 03:37:27.741903] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:04.249 [2024-11-05 03:37:27.741953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:04.249 [2024-11-05 03:37:27.742074] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:04.249 [2024-11-05 03:37:27.742160] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:04.249 [2024-11-05 03:37:27.742195] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:04.249 [2024-11-05 03:37:27.742214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.249 [2024-11-05 03:37:27.742225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:04.249 [2024-11-05 03:37:27.742236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:27:04.249 [2024-11-05 03:37:27.742246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.249 [2024-11-05 03:37:27.742347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.249 [2024-11-05 03:37:27.742359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:04.249 [2024-11-05 03:37:27.742370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:04.249 [2024-11-05 03:37:27.742380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.249 [2024-11-05 03:37:27.742474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:04.249 [2024-11-05 03:37:27.742493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:04.249 [2024-11-05 03:37:27.742505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:04.249 [2024-11-05 03:37:27.742536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:04.249 [2024-11-05 03:37:27.742565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.249 [2024-11-05 03:37:27.742583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:04.249 [2024-11-05 03:37:27.742592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:04.249 [2024-11-05 03:37:27.742602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.249 [2024-11-05 03:37:27.742611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:04.249 [2024-11-05 03:37:27.742621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:04.249 [2024-11-05 03:37:27.742638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:04.249 [2024-11-05 03:37:27.742658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742667] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:04.249 [2024-11-05 03:37:27.742686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:04.249 [2024-11-05 03:37:27.742725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:04.249 [2024-11-05 03:37:27.742752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:04.249 [2024-11-05 03:37:27.742781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:04.249 [2024-11-05 03:37:27.742809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.249 [2024-11-05 03:37:27.742828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:04.249 [2024-11-05 03:37:27.742837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:04.249 [2024-11-05 03:37:27.742846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.249 [2024-11-05 03:37:27.742855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:04.249 [2024-11-05 03:37:27.742864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:04.249 [2024-11-05 03:37:27.742873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:04.249 [2024-11-05 03:37:27.742892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:04.249 [2024-11-05 03:37:27.742901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:04.249 [2024-11-05 03:37:27.742919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:04.249 [2024-11-05 03:37:27.742929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.249 [2024-11-05 03:37:27.742938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.249 [2024-11-05 03:37:27.742949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:04.249 [2024-11-05 03:37:27.742958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:04.249 [2024-11-05 03:37:27.742967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:04.249 
[2024-11-05 03:37:27.742976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:04.249 [2024-11-05 03:37:27.742985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:04.249 [2024-11-05 03:37:27.742994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:04.249 [2024-11-05 03:37:27.743006] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:04.249 [2024-11-05 03:37:27.743020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.249 [2024-11-05 03:37:27.743031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:04.249 [2024-11-05 03:37:27.743042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:04.249 [2024-11-05 03:37:27.743052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:04.249 [2024-11-05 03:37:27.743062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:04.249 [2024-11-05 03:37:27.743073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:04.249 [2024-11-05 03:37:27.743083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:04.250 [2024-11-05 03:37:27.743093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:04.250 [2024-11-05 03:37:27.743104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:04.250 [2024-11-05 03:37:27.743114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:04.250 [2024-11-05 03:37:27.743124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:04.250 [2024-11-05 03:37:27.743175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:04.250 [2024-11-05 03:37:27.743190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.250 [2024-11-05 03:37:27.743211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:04.250 [2024-11-05 03:37:27.743221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:04.250 [2024-11-05 03:37:27.743232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:04.250 [2024-11-05 03:37:27.743242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.250 [2024-11-05 03:37:27.743253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:04.250 [2024-11-05 03:37:27.743263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:27:04.250 [2024-11-05 03:37:27.743273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.250 [2024-11-05 03:37:27.782107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.250 [2024-11-05 03:37:27.782144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.250 [2024-11-05 03:37:27.782158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.840 ms 00:27:04.250 [2024-11-05 03:37:27.782169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.250 [2024-11-05 03:37:27.782249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.250 [2024-11-05 03:37:27.782261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:04.250 [2024-11-05 03:37:27.782272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:04.250 [2024-11-05 03:37:27.782282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.845004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.845158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.511 [2024-11-05 03:37:27.845179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.754 ms 00:27:04.511 [2024-11-05 03:37:27.845190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.845226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.845237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.511 [2024-11-05 03:37:27.845249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:04.511 [2024-11-05 03:37:27.845266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.845769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.845783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.511 [2024-11-05 03:37:27.845795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:27:04.511 [2024-11-05 03:37:27.845805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.845923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.845937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.511 [2024-11-05 03:37:27.845948] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:04.511 [2024-11-05 03:37:27.845964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.865143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.865179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.511 [2024-11-05 03:37:27.865195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.189 ms 00:27:04.511 [2024-11-05 03:37:27.865206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.884958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:04.511 [2024-11-05 03:37:27.884995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:04.511 [2024-11-05 03:37:27.885011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.885022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:04.511 [2024-11-05 03:37:27.885034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.712 ms 00:27:04.511 [2024-11-05 03:37:27.885044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.914674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.914727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:04.511 [2024-11-05 03:37:27.914742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.633 ms 00:27:04.511 [2024-11-05 03:37:27.914753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.933318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.933354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:04.511 [2024-11-05 03:37:27.933367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.546 ms 00:27:04.511 [2024-11-05 03:37:27.933376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.951540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.951576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:04.511 [2024-11-05 03:37:27.951588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:27:04.511 [2024-11-05 03:37:27.951598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:27.952355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:27.952378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:04.511 [2024-11-05 03:37:27.952391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:27:04.511 [2024-11-05 03:37:27.952405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.038595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.038659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:04.511 [2024-11-05 03:37:28.038682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.308 ms 00:27:04.511 [2024-11-05 03:37:28.038694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.049607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:04.511 [2024-11-05 03:37:28.052012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.052044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:04.511 [2024-11-05 03:37:28.052057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.283 ms 00:27:04.511 [2024-11-05 03:37:28.052068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.052149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.052163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:04.511 [2024-11-05 03:37:28.052174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:04.511 [2024-11-05 03:37:28.052188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.053075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.053097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:04.511 [2024-11-05 03:37:28.053108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:27:04.511 [2024-11-05 03:37:28.053118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.053146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.053157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:04.511 [2024-11-05 03:37:28.053168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:04.511 [2024-11-05 03:37:28.053179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.053213] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:04.511 [2024-11-05 03:37:28.053230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.053240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:04.511 [2024-11-05 03:37:28.053250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:04.511 [2024-11-05 03:37:28.053260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.089692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.089731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:04.511 [2024-11-05 03:37:28.089745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.465 ms 00:27:04.511 [2024-11-05 03:37:28.089762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.511 [2024-11-05 03:37:28.089837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.511 [2024-11-05 03:37:28.089849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:04.511 [2024-11-05 03:37:28.089861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:04.511 [2024-11-05 03:37:28.089871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
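Every management step in this startup (and in the shutdowns around it) is traced from mngt/ftl_mngt.c as an Action / name / duration / status quadruple, so the total reported just below can be broken down per step mechanically. A minimal sketch, assuming a captured console log with one message per line (build.log is a hypothetical file name):

# Rank FTL management steps by duration, largest first.
awk '/trace_step/ && /name: /     { n = $0; sub(/.*name: /, "", n) }
     /trace_step/ && /duration: / { d = $0; sub(/.*duration: /, "", d)
                                    sub(/ ms.*/, "", d)
                                    printf "%10.3f ms  %s\n", d, n }' build.log | sort -rn | head

Note also the "Set FTL dirty state" step above: the device stays flagged dirty while it is open, and only an orderly shutdown flips it back ("Set FTL clean state" further down), which is presumably how a dirty shutdown gets detected on the next load.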
00:27:04.512 [2024-11-05 03:37:28.090991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.122 ms, result 0
00:27:05.891  [2024-11-05T03:37:30.465Z] Copying: 28/1024 [MB] (28 MBps) [... 35 intermediate progress-meter frames elided; per-interval throughput stayed between 24 and 29 MBps ...] [2024-11-05T03:38:05.204Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-11-05 03:38:05.174841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.620 [2024-11-05 03:38:05.174920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:41.620 [2024-11-05 03:38:05.174944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:41.620 [2024-11-05 03:38:05.174960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.620 [2024-11-05 03:38:05.174992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:41.620 [2024-11-05 03:38:05.180062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.620 [2024-11-05 03:38:05.180107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:41.620 [2024-11-05 03:38:05.180129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.050 ms 00:27:41.620 [2024-11-05 03:38:05.180140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.620 [2024-11-05 03:38:05.180375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:41.620 [2024-11-05 03:38:05.180390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:41.620 [2024-11-05 03:38:05.180402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:41.620 [2024-11-05 03:38:05.180412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.620 [2024-11-05 03:38:05.183714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.620 [2024-11-05 03:38:05.183895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:41.620 [2024-11-05 03:38:05.183933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:27:41.620 [2024-11-05 03:38:05.183949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.620 [2024-11-05 03:38:05.190017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.620 [2024-11-05 03:38:05.190056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:41.620 [2024-11-05 03:38:05.190069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.029 ms 00:27:41.620 [2024-11-05 03:38:05.190080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.880 [2024-11-05 03:38:05.227507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.880 [2024-11-05 03:38:05.227547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:41.880 [2024-11-05 03:38:05.227562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.408 ms 00:27:41.880 [2024-11-05 03:38:05.227573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.880 [2024-11-05 03:38:05.248612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.880 [2024-11-05 03:38:05.248651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:41.880 [2024-11-05 03:38:05.248666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.031 ms 00:27:41.880 [2024-11-05 03:38:05.248676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.880 [2024-11-05 03:38:05.250374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.880 [2024-11-05 03:38:05.250539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:41.880 [2024-11-05 03:38:05.250566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:27:41.880 [2024-11-05 03:38:05.250579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.880 [2024-11-05 03:38:05.287585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.880 [2024-11-05 03:38:05.287623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:41.880 [2024-11-05 03:38:05.287637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.038 ms 00:27:41.880 [2024-11-05 03:38:05.287646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.880 [2024-11-05 03:38:05.323677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.880 [2024-11-05 03:38:05.323727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:41.881 [2024-11-05 03:38:05.323740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.049 ms 00:27:41.881 [2024-11-05 03:38:05.323750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.881 [2024-11-05 
03:38:05.358467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.881 [2024-11-05 03:38:05.358504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:41.881 [2024-11-05 03:38:05.358518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.734 ms 00:27:41.881 [2024-11-05 03:38:05.358527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.881 [2024-11-05 03:38:05.393922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.881 [2024-11-05 03:38:05.394091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:41.881 [2024-11-05 03:38:05.394121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.373 ms 00:27:41.881 [2024-11-05 03:38:05.394134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.881 [2024-11-05 03:38:05.394177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:41.881 [2024-11-05 03:38:05.394196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:41.881 [2024-11-05 03:38:05.394214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:41.881 [2024-11-05 03:38:05.394226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 
0 state: free 00:27:41.881 [2024-11-05 03:38:05.394425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
43: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394958] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:41.881 [2024-11-05 03:38:05.394990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:41.882 [2024-11-05 03:38:05.395321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:41.882 [2024-11-05 03:38:05.395336] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8afb070d-d72c-4e93-9c89-06087d4a9cf7 00:27:41.882 [2024-11-05 03:38:05.395348] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:41.882 [2024-11-05 03:38:05.395358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:41.882 [2024-11-05 03:38:05.395368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:41.882 [2024-11-05 03:38:05.395378] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:41.882 [2024-11-05 03:38:05.395388] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:41.882 [2024-11-05 03:38:05.395399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:41.882 [2024-11-05 03:38:05.395418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:41.882 [2024-11-05 03:38:05.395427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:41.882 [2024-11-05 03:38:05.395436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:41.882 [2024-11-05 03:38:05.395447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.882 [2024-11-05 03:38:05.395457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:41.882 [2024-11-05 03:38:05.395468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:27:41.882 [2024-11-05 03:38:05.395478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.882 [2024-11-05 03:38:05.415584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.882 [2024-11-05 03:38:05.415620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:41.882 [2024-11-05 03:38:05.415642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.080 ms 00:27:41.882 [2024-11-05 03:38:05.415652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.882 [2024-11-05 03:38:05.416216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.882 [2024-11-05 03:38:05.416231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:41.882 [2024-11-05 03:38:05.416248] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:27:41.882 [2024-11-05 03:38:05.416258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.468191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.468228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.141 [2024-11-05 03:38:05.468241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.468251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.468317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.468329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.141 [2024-11-05 03:38:05.468346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.468356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.468436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.468450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.141 [2024-11-05 03:38:05.468461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.468471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.468488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.468500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.141 [2024-11-05 03:38:05.468510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.468524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.594963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.595021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.141 [2024-11-05 03:38:05.595037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.595048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.694407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.694626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.141 [2024-11-05 03:38:05.694655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.141 [2024-11-05 03:38:05.694675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.141 [2024-11-05 03:38:05.694811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.141 [2024-11-05 03:38:05.694826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.142 [2024-11-05 03:38:05.694838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.694849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.694888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.142 [2024-11-05 03:38:05.694900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:27:42.142 [2024-11-05 03:38:05.694911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.694921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.695065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.142 [2024-11-05 03:38:05.695080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.142 [2024-11-05 03:38:05.695091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.695101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.695154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.142 [2024-11-05 03:38:05.695167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.142 [2024-11-05 03:38:05.695178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.695188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.695230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.142 [2024-11-05 03:38:05.695242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.142 [2024-11-05 03:38:05.695253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.695263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.695327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.142 [2024-11-05 03:38:05.695340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.142 [2024-11-05 03:38:05.695351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.142 [2024-11-05 03:38:05.695361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.142 [2024-11-05 03:38:05.695483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.462 ms, result 0 00:27:43.521 00:27:43.521 00:27:43.521 03:38:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:45.427 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:45.427 Process with pid 78552 is not found 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78552 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' 
-z 78552 ']' 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78552 00:27:45.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78552) - No such process 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78552 is not found' 00:27:45.427 03:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:45.687 Remove shared memory files 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:45.687 ************************************ 00:27:45.687 END TEST ftl_dirty_shutdown 00:27:45.687 ************************************ 00:27:45.687 00:27:45.687 real 3m27.830s 00:27:45.687 user 3m53.448s 00:27:45.687 sys 0m38.574s 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:45.687 03:38:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.688 03:38:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:45.688 03:38:09 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:45.688 03:38:09 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:45.688 03:38:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:45.688 ************************************ 00:27:45.688 START TEST ftl_upgrade_shutdown 00:27:45.688 ************************************ 00:27:45.688 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:45.948 * Looking for test storage... 
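The teardown traced above tolerates an already-dead target: killprocess first checks that a pid was recorded, then probes it with kill -0 (signal 0 delivers nothing; it only tests existence) and treats "No such process" as a clean result, which is why the run continues after pid 78552 is gone. A minimal sketch of that pattern, assuming the helper's internals beyond the probes actually shown in the trace:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                # no pid recorded for this run
    if ! kill -0 "$pid" 2>/dev/null; then    # probe only; sends no signal
        echo "Process with pid $pid is not found"
        return 0                             # already gone counts as success
    fi
    kill "$pid" && wait "$pid" 2>/dev/null   # assumed: terminate and reap
}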
00:27:45.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:45.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.948 --rc genhtml_branch_coverage=1 00:27:45.948 --rc genhtml_function_coverage=1 00:27:45.948 --rc genhtml_legend=1 00:27:45.948 --rc geninfo_all_blocks=1 00:27:45.948 --rc geninfo_unexecuted_blocks=1 00:27:45.948 00:27:45.948 ' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:45.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.948 --rc genhtml_branch_coverage=1 00:27:45.948 --rc genhtml_function_coverage=1 00:27:45.948 --rc genhtml_legend=1 00:27:45.948 --rc geninfo_all_blocks=1 00:27:45.948 --rc geninfo_unexecuted_blocks=1 00:27:45.948 00:27:45.948 ' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:45.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.948 --rc genhtml_branch_coverage=1 00:27:45.948 --rc genhtml_function_coverage=1 00:27:45.948 --rc genhtml_legend=1 00:27:45.948 --rc geninfo_all_blocks=1 00:27:45.948 --rc geninfo_unexecuted_blocks=1 00:27:45.948 00:27:45.948 ' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:45.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.948 --rc genhtml_branch_coverage=1 00:27:45.948 --rc genhtml_function_coverage=1 00:27:45.948 --rc genhtml_legend=1 00:27:45.948 --rc geninfo_all_blocks=1 00:27:45.948 --rc geninfo_unexecuted_blocks=1 00:27:45.948 00:27:45.948 ' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:45.948 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:45.949 03:38:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80792 00:27:45.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80792 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80792 ']' 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.949 03:38:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:46.217 [2024-11-05 03:38:09.560605] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
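Before the target comes up, tcp_target_setup walks a fixed list of FTL_* parameters and refuses to proceed unless each one is set; the [[ -z ftl ]], [[ -z 0000:00:11.0 ]], and similar checks in the trace are that loop with the values already expanded. A sketch of the loop under the assumption that common.sh uses bash indirect expansion (the trace shows only expanded values, and the failure branch is never exercised in this run):

spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
params=(FTL_BDEV FTL_BASE FTL_BASE_SIZE FTL_CACHE FTL_CACHE_SIZE FTL_L2P_DRAM_LIMIT)
for param in "${params[@]}"; do
    # ${!param} expands the variable whose name is in $param (assumed syntax)
    if [[ -z ${!param} ]]; then
        echo "$param is not set" >&2         # assumed failure branch
        exit 1
    fi
done
"$spdk_tgt_bin" "--cpumask=[0]" &            # then launch the target (pid 80792 here)
spdk_tgt_pid=$!                              # ...and waitforlisten on /var/tmp/spdk.sock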
00:27:46.217 [2024-11-05 03:38:09.560747] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80792 ] 00:27:46.217 [2024-11-05 03:38:09.739928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.479 [2024-11-05 03:38:09.855780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.416 03:38:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:47.417 03:38:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:47.676 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:47.676 { 00:27:47.676 "name": "basen1", 00:27:47.676 "aliases": [ 00:27:47.676 "f7867012-0dca-4cce-86e6-5d063b0a1e85" 00:27:47.676 ], 00:27:47.676 "product_name": "NVMe disk", 00:27:47.676 "block_size": 4096, 00:27:47.676 "num_blocks": 1310720, 00:27:47.676 "uuid": "f7867012-0dca-4cce-86e6-5d063b0a1e85", 00:27:47.676 "numa_id": -1, 00:27:47.676 "assigned_rate_limits": { 00:27:47.676 "rw_ios_per_sec": 0, 00:27:47.676 "rw_mbytes_per_sec": 0, 00:27:47.676 "r_mbytes_per_sec": 0, 00:27:47.676 "w_mbytes_per_sec": 0 00:27:47.676 }, 00:27:47.676 "claimed": true, 00:27:47.676 "claim_type": "read_many_write_one", 00:27:47.676 "zoned": false, 00:27:47.676 "supported_io_types": { 00:27:47.676 "read": true, 00:27:47.676 "write": true, 00:27:47.676 "unmap": true, 00:27:47.676 "flush": true, 00:27:47.676 "reset": true, 00:27:47.676 "nvme_admin": true, 00:27:47.676 "nvme_io": true, 00:27:47.676 "nvme_io_md": false, 00:27:47.676 "write_zeroes": true, 00:27:47.676 "zcopy": false, 00:27:47.676 "get_zone_info": false, 00:27:47.676 "zone_management": false, 00:27:47.676 "zone_append": false, 00:27:47.676 "compare": true, 00:27:47.676 "compare_and_write": false, 00:27:47.676 "abort": true, 00:27:47.676 "seek_hole": false, 00:27:47.676 "seek_data": false, 00:27:47.676 "copy": true, 00:27:47.676 "nvme_iov_md": false 00:27:47.676 }, 00:27:47.676 "driver_specific": { 00:27:47.676 "nvme": [ 00:27:47.676 { 00:27:47.676 "pci_address": "0000:00:11.0", 00:27:47.676 "trid": { 00:27:47.676 "trtype": "PCIe", 00:27:47.676 "traddr": "0000:00:11.0" 00:27:47.676 }, 00:27:47.676 "ctrlr_data": { 00:27:47.676 "cntlid": 0, 00:27:47.676 "vendor_id": "0x1b36", 00:27:47.676 "model_number": "QEMU NVMe Ctrl", 00:27:47.676 "serial_number": "12341", 00:27:47.676 "firmware_revision": "8.0.0", 00:27:47.676 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:47.676 "oacs": { 00:27:47.676 "security": 0, 00:27:47.676 "format": 1, 00:27:47.676 "firmware": 0, 00:27:47.676 "ns_manage": 1 00:27:47.676 }, 00:27:47.676 "multi_ctrlr": false, 00:27:47.676 "ana_reporting": false 00:27:47.676 }, 00:27:47.676 "vs": { 00:27:47.676 "nvme_version": "1.4" 00:27:47.676 }, 00:27:47.676 "ns_data": { 00:27:47.676 "id": 1, 00:27:47.676 "can_share": false 00:27:47.676 } 00:27:47.676 } 00:27:47.676 ], 00:27:47.676 "mp_policy": "active_passive" 00:27:47.676 } 00:27:47.676 } 00:27:47.676 ]' 00:27:47.677 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:47.936 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:48.195 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5 00:27:48.195 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:48.195 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a5c4c8d-369a-4c8a-8e81-4d6439ff05b5 00:27:48.195 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:48.454 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3fbb344d-2509-4c17-be3d-83a11cb378f6 00:27:48.454 03:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3fbb344d-2509-4c17-be3d-83a11cb378f6 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=13acad43-2113-44f5-8609-7c03c63f57e2 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 13acad43-2113-44f5-8609-7c03c63f57e2 ]] 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 13acad43-2113-44f5-8609-7c03c63f57e2 5120 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=13acad43-2113-44f5-8609-7c03c63f57e2 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 13acad43-2113-44f5-8609-7c03c63f57e2 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=13acad43-2113-44f5-8609-7c03c63f57e2 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:48.713 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 13acad43-2113-44f5-8609-7c03c63f57e2 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:48.973 { 00:27:48.973 "name": "13acad43-2113-44f5-8609-7c03c63f57e2", 00:27:48.973 "aliases": [ 00:27:48.973 "lvs/basen1p0" 00:27:48.973 ], 00:27:48.973 "product_name": "Logical Volume", 00:27:48.973 "block_size": 4096, 00:27:48.973 "num_blocks": 5242880, 00:27:48.973 "uuid": "13acad43-2113-44f5-8609-7c03c63f57e2", 00:27:48.973 "assigned_rate_limits": { 00:27:48.973 "rw_ios_per_sec": 0, 00:27:48.973 "rw_mbytes_per_sec": 0, 00:27:48.973 "r_mbytes_per_sec": 0, 00:27:48.973 "w_mbytes_per_sec": 0 00:27:48.973 }, 00:27:48.973 "claimed": false, 00:27:48.973 "zoned": false, 00:27:48.973 "supported_io_types": { 00:27:48.973 "read": true, 00:27:48.973 "write": true, 00:27:48.973 "unmap": true, 00:27:48.973 "flush": false, 00:27:48.973 "reset": true, 00:27:48.973 "nvme_admin": false, 00:27:48.973 "nvme_io": false, 00:27:48.973 "nvme_io_md": false, 00:27:48.973 "write_zeroes": 
true, 00:27:48.973 "zcopy": false, 00:27:48.973 "get_zone_info": false, 00:27:48.973 "zone_management": false, 00:27:48.973 "zone_append": false, 00:27:48.973 "compare": false, 00:27:48.973 "compare_and_write": false, 00:27:48.973 "abort": false, 00:27:48.973 "seek_hole": true, 00:27:48.973 "seek_data": true, 00:27:48.973 "copy": false, 00:27:48.973 "nvme_iov_md": false 00:27:48.973 }, 00:27:48.973 "driver_specific": { 00:27:48.973 "lvol": { 00:27:48.973 "lvol_store_uuid": "3fbb344d-2509-4c17-be3d-83a11cb378f6", 00:27:48.973 "base_bdev": "basen1", 00:27:48.973 "thin_provision": true, 00:27:48.973 "num_allocated_clusters": 0, 00:27:48.973 "snapshot": false, 00:27:48.973 "clone": false, 00:27:48.973 "esnap_clone": false 00:27:48.973 } 00:27:48.973 } 00:27:48.973 } 00:27:48.973 ]' 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:48.973 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:49.232 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:49.232 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:49.232 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:49.495 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:49.495 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:49.495 03:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 13acad43-2113-44f5-8609-7c03c63f57e2 -c cachen1p0 --l2p_dram_limit 2 00:27:49.755 [2024-11-05 03:38:13.121981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.122031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:49.755 [2024-11-05 03:38:13.122050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:49.755 [2024-11-05 03:38:13.122062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.122130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.122142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.755 [2024-11-05 03:38:13.122156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:49.755 [2024-11-05 03:38:13.122167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.122192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:49.755 [2024-11-05 
03:38:13.123214] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:49.755 [2024-11-05 03:38:13.123251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.123262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.755 [2024-11-05 03:38:13.123276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.062 ms 00:27:49.755 [2024-11-05 03:38:13.123304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.123395] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a97cb0e9-943d-480c-a4e4-340d83fc0679 00:27:49.755 [2024-11-05 03:38:13.124822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.124860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:49.755 [2024-11-05 03:38:13.124872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:49.755 [2024-11-05 03:38:13.124885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.132268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.132306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.755 [2024-11-05 03:38:13.132322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.350 ms 00:27:49.755 [2024-11-05 03:38:13.132335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.132381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.132397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.755 [2024-11-05 03:38:13.132408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:49.755 [2024-11-05 03:38:13.132424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.132491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.132506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:49.755 [2024-11-05 03:38:13.132517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:49.755 [2024-11-05 03:38:13.132536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.132561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:49.755 [2024-11-05 03:38:13.137895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.137929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.755 [2024-11-05 03:38:13.137946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.346 ms 00:27:49.755 [2024-11-05 03:38:13.137957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.137988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.755 [2024-11-05 03:38:13.138000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:49.755 [2024-11-05 03:38:13.138013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:49.755 [2024-11-05 03:38:13.138023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:49.755 [2024-11-05 03:38:13.138061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:49.755 [2024-11-05 03:38:13.138190] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:49.755 [2024-11-05 03:38:13.138209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:49.756 [2024-11-05 03:38:13.138222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:49.756 [2024-11-05 03:38:13.138237] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138250] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138264] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:49.756 [2024-11-05 03:38:13.138275] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:49.756 [2024-11-05 03:38:13.138303] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:49.756 [2024-11-05 03:38:13.138314] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:49.756 [2024-11-05 03:38:13.138327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.756 [2024-11-05 03:38:13.138337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:49.756 [2024-11-05 03:38:13.138349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:27:49.756 [2024-11-05 03:38:13.138360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.756 [2024-11-05 03:38:13.138441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.756 [2024-11-05 03:38:13.138453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:49.756 [2024-11-05 03:38:13.138467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:49.756 [2024-11-05 03:38:13.138487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.756 [2024-11-05 03:38:13.138582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:49.756 [2024-11-05 03:38:13.138595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:49.756 [2024-11-05 03:38:13.138608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:49.756 [2024-11-05 03:38:13.138648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:49.756 [2024-11-05 03:38:13.138669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:49.756 [2024-11-05 03:38:13.138681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:49.756 [2024-11-05 03:38:13.138691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:49.756 [2024-11-05 03:38:13.138721] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:49.756 [2024-11-05 03:38:13.138748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:49.756 [2024-11-05 03:38:13.138771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:49.756 [2024-11-05 03:38:13.138781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:49.756 [2024-11-05 03:38:13.138806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:49.756 [2024-11-05 03:38:13.138820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.138831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:49.756 [2024-11-05 03:38:13.138843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:49.756 [2024-11-05 03:38:13.138865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:49.756 [2024-11-05 03:38:13.138886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:49.756 [2024-11-05 03:38:13.138897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:49.756 [2024-11-05 03:38:13.138919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:49.756 [2024-11-05 03:38:13.138928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:49.756 [2024-11-05 03:38:13.138948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:49.756 [2024-11-05 03:38:13.138960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.756 [2024-11-05 03:38:13.138969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:49.756 [2024-11-05 03:38:13.138983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:49.756 [2024-11-05 03:38:13.138993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:49.756 [2024-11-05 03:38:13.139014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:49.756 [2024-11-05 03:38:13.139026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:49.756 [2024-11-05 03:38:13.139046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:49.756 [2024-11-05 03:38:13.139081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:49.756 [2024-11-05 03:38:13.139096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139105] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:49.756 [2024-11-05 03:38:13.139120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:49.756 [2024-11-05 03:38:13.139132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.756 [2024-11-05 03:38:13.139148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.756 [2024-11-05 03:38:13.139159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:49.756 [2024-11-05 03:38:13.139173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:49.756 [2024-11-05 03:38:13.139182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:49.756 [2024-11-05 03:38:13.139194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:49.756 [2024-11-05 03:38:13.139204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:49.756 [2024-11-05 03:38:13.139216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:49.756 [2024-11-05 03:38:13.139248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:49.756 [2024-11-05 03:38:13.139290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:49.756 [2024-11-05 03:38:13.139343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:49.756 [2024-11-05 03:38:13.139377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:49.756 [2024-11-05 03:38:13.139390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:49.756 [2024-11-05 03:38:13.139400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:49.756 [2024-11-05 03:38:13.139413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:49.756 [2024-11-05 03:38:13.139496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:49.756 [2024-11-05 03:38:13.139510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.756 [2024-11-05 03:38:13.139534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:49.756 [2024-11-05 03:38:13.139545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:49.756 [2024-11-05 03:38:13.139558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:49.756 [2024-11-05 03:38:13.139570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.756 [2024-11-05 03:38:13.139583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:49.756 [2024-11-05 03:38:13.139594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:27:49.757 [2024-11-05 03:38:13.139607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.757 [2024-11-05 03:38:13.139653] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
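Everything from bdev_nvme_attach_controller through bdev_ftl_create above reduces to a short RPC sequence: carve a 20480 MiB thin-provisioned lvol out of the base namespace, split a 5120 MiB write-buffer region off the cache namespace, and bind the two into one FTL bdev with the L2P DRAM limit taken from FTL_L2P_DRAM_LIMIT. A condensed sketch of the calls as they appear in the trace (shell variable names are placeholders; the uuid captures match how the script consumes rpc.py output):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
$rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
    | while read -r u; do $rpc bdev_lvol_delete_lvstore -u "$u"; done  # clear stale stores
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)                     # 3fbb344d-... in this run
base=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")           # thin lvol, 13acad43-...
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # one split: cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2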
00:27:49.757 [2024-11-05 03:38:13.139671] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:53.952 [2024-11-05 03:38:16.957625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:16.957691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:53.952 [2024-11-05 03:38:16.957709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3824.170 ms 00:27:53.952 [2024-11-05 03:38:16.957723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:16.997743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:16.997798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:53.952 [2024-11-05 03:38:16.997814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.673 ms 00:27:53.952 [2024-11-05 03:38:16.997828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:16.997917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:16.997933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:53.952 [2024-11-05 03:38:16.997944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:53.952 [2024-11-05 03:38:16.997960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.043597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.043645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:53.952 [2024-11-05 03:38:17.043660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.648 ms 00:27:53.952 [2024-11-05 03:38:17.043673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.043708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.043727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:53.952 [2024-11-05 03:38:17.043739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:53.952 [2024-11-05 03:38:17.043752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.044221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.044247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:53.952 [2024-11-05 03:38:17.044259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:27:53.952 [2024-11-05 03:38:17.044272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.044333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.044348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:53.952 [2024-11-05 03:38:17.044362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:53.952 [2024-11-05 03:38:17.044377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.063358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.063405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:53.952 [2024-11-05 03:38:17.063420] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.991 ms 00:27:53.952 [2024-11-05 03:38:17.063433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.076051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:53.952 [2024-11-05 03:38:17.077096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.077125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:53.952 [2024-11-05 03:38:17.077141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.599 ms 00:27:53.952 [2024-11-05 03:38:17.077152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.118852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.118894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:53.952 [2024-11-05 03:38:17.118912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.733 ms 00:27:53.952 [2024-11-05 03:38:17.118924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.119013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.119030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:53.952 [2024-11-05 03:38:17.119047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:53.952 [2024-11-05 03:38:17.119057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.155649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.155688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:53.952 [2024-11-05 03:38:17.155706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.596 ms 00:27:53.952 [2024-11-05 03:38:17.155716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.192346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.192378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:53.952 [2024-11-05 03:38:17.192394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.638 ms 00:27:53.952 [2024-11-05 03:38:17.192404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.193057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.193084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:53.952 [2024-11-05 03:38:17.193098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:27:53.952 [2024-11-05 03:38:17.193110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.296686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.952 [2024-11-05 03:38:17.296728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:53.952 [2024-11-05 03:38:17.296750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 103.680 ms 00:27:53.952 [2024-11-05 03:38:17.296762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.334062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:53.952 [2024-11-05 03:38:17.334108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:53.952 [2024-11-05 03:38:17.334138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.269 ms 00:27:53.952 [2024-11-05 03:38:17.334149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.952 [2024-11-05 03:38:17.371440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.953 [2024-11-05 03:38:17.371482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:53.953 [2024-11-05 03:38:17.371499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.302 ms 00:27:53.953 [2024-11-05 03:38:17.371510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.953 [2024-11-05 03:38:17.408756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.953 [2024-11-05 03:38:17.408795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:53.953 [2024-11-05 03:38:17.408812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.259 ms 00:27:53.953 [2024-11-05 03:38:17.408822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.953 [2024-11-05 03:38:17.408872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.953 [2024-11-05 03:38:17.408884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:53.953 [2024-11-05 03:38:17.408900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:53.953 [2024-11-05 03:38:17.408910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.953 [2024-11-05 03:38:17.409034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.953 [2024-11-05 03:38:17.409047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:53.953 [2024-11-05 03:38:17.409063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:53.953 [2024-11-05 03:38:17.409073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.953 [2024-11-05 03:38:17.410259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4294.702 ms, result 0 00:27:53.953 { 00:27:53.953 "name": "ftl", 00:27:53.953 "uuid": "a97cb0e9-943d-480c-a4e4-340d83fc0679" 00:27:53.953 } 00:27:53.953 03:38:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:54.212 [2024-11-05 03:38:17.612969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.212 03:38:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:54.471 03:38:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:54.471 [2024-11-05 03:38:18.008640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:54.471 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:54.730 [2024-11-05 03:38:18.218134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:54.730 03:38:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:55.299 Fill FTL, iteration 1 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80924 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80924 /var/tmp/spdk.tgt.sock 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80924 ']' 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:55.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:55.299 03:38:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.299 [2024-11-05 03:38:18.688493] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
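Condensed, the target-side bring-up logged above exports the freshly created FTL bdev over NVMe/TCP in four RPCs, all taken verbatim from the trace:

    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

Once the listener is up on 127.0.0.1:4420, save_config snapshots the target configuration so the same stack can be replayed after the shutdown half of the test.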
00:27:55.299 [2024-11-05 03:38:18.689045] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80924 ] 00:27:55.299 [2024-11-05 03:38:18.867441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.559 [2024-11-05 03:38:18.980955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.496 03:38:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:56.496 03:38:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:56.496 03:38:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:56.754 ftln1 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80924 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80924 ']' 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80924 00:27:56.754 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:56.755 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:56.755 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80924 00:27:57.044 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:57.044 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:57.044 killing process with pid 80924 00:27:57.044 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80924' 00:27:57.044 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80924 00:27:57.044 03:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80924 00:27:59.601 03:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:59.601 03:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:59.601 [2024-11-05 03:38:22.749366] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
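The second spdk_tgt on /var/tmp/spdk.tgt.sock acts as the TCP initiator: it attaches to the exported subsystem (namespace 1 surfaces as bdev "ftln1"), and its bdev configuration is wrapped into a standalone JSON that every spdk_dd run below consumes via --json. A sketch of what common.sh does here; the redirect target is not visible in xtrace, but the -f test at common.sh@153 implies it is test/ftl/config/ini.json:

    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # prints "ftln1"
    {
        echo '{"subsystems": ['
        rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json        # assumed destination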
00:27:59.601 [2024-11-05 03:38:22.749484] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80978 ] 00:27:59.601 [2024-11-05 03:38:22.931398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.601 [2024-11-05 03:38:23.047078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.979  [2024-11-05T03:38:25.500Z] Copying: 245/1024 [MB] (245 MBps) [2024-11-05T03:38:26.879Z] Copying: 489/1024 [MB] (244 MBps) [2024-11-05T03:38:27.819Z] Copying: 732/1024 [MB] (243 MBps) [2024-11-05T03:38:27.819Z] Copying: 977/1024 [MB] (245 MBps) [2024-11-05T03:38:29.198Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:28:05.614 00:28:05.614 Calculate MD5 checksum, iteration 1 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.614 03:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.614 [2024-11-05 03:38:28.936725] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
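Each iteration writes 1 GiB of fresh urandom through ftln1 and then reads the same 1 GiB window back for hashing, with seek (the write offset) and skip (the read offset) advancing by count megabytes per pass. Reconstructed from the traced upgrade_shutdown.sh variables and commands (a sketch, not the script verbatim; tcp_dd is the common.sh wrapper around the spdk_dd invocations seen above):

    bs=1048576 count=1024 qd=2 iterations=2 seek=0 skip=0 sums=()
    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        (( seek += count ))
        tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
        (( skip += count ))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # $testfile stands for test/ftl/file
    done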
00:28:05.614 [2024-11-05 03:38:28.937599] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81042 ] 00:28:05.614 [2024-11-05 03:38:29.131754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.874 [2024-11-05 03:38:29.247296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.253  [2024-11-05T03:38:31.406Z] Copying: 668/1024 [MB] (668 MBps) [2024-11-05T03:38:32.344Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:28:08.760 00:28:08.760 03:38:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:08.760 03:38:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:10.668 Fill FTL, iteration 2 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1c9f1d51ea9dbe1cdd8bb59689235ec3 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.668 03:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:10.668 [2024-11-05 03:38:34.009740] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
00:28:10.668 [2024-11-05 03:38:34.009895] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81098 ] 00:28:10.668 [2024-11-05 03:38:34.216702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.928 [2024-11-05 03:38:34.340391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.308  [2024-11-05T03:38:36.830Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-05T03:38:38.209Z] Copying: 468/1024 [MB] (228 MBps) [2024-11-05T03:38:39.146Z] Copying: 693/1024 [MB] (225 MBps) [2024-11-05T03:38:39.405Z] Copying: 920/1024 [MB] (227 MBps) [2024-11-05T03:38:40.783Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:28:17.199 00:28:17.199 Calculate MD5 checksum, iteration 2 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:17.199 03:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:17.199 [2024-11-05 03:38:40.501504] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
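After the second read-back is hashed, the test moves to the FTL control plane. The cache-occupancy guard a few lines below condenses to counting NV cache chunks with non-zero utilization in the bdev_ftl_get_properties output; this is the jq filter logged verbatim at upgrade_shutdown.sh@63:

    rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

It returns 3 here (the two CLOSED chunks at utilization 1.0 plus the OPEN chunk at 0.001953125), so the -eq 0 check at @64 fails and the test proceeds with data still resident in the cache, which is the interesting case for a shutdown-upgrade.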
00:28:17.199 [2024-11-05 03:38:40.501768] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81164 ] 00:28:17.199 [2024-11-05 03:38:40.683674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.458 [2024-11-05 03:38:40.804403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.365  [2024-11-05T03:38:43.209Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-05T03:38:44.590Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:28:21.006 00:28:21.006 03:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:21.006 03:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=442965615c582959c96eea1128c97096 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:22.912 [2024-11-05 03:38:46.227976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.912 [2024-11-05 03:38:46.228027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:22.912 [2024-11-05 03:38:46.228044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:22.912 [2024-11-05 03:38:46.228056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.912 [2024-11-05 03:38:46.228092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.912 [2024-11-05 03:38:46.228104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:22.912 [2024-11-05 03:38:46.228115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.912 [2024-11-05 03:38:46.228130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.912 [2024-11-05 03:38:46.228151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.912 [2024-11-05 03:38:46.228163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:22.912 [2024-11-05 03:38:46.228173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.912 [2024-11-05 03:38:46.228183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.912 [2024-11-05 03:38:46.228250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.274 ms, result 0 00:28:22.912 true 00:28:22.912 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:22.912 { 00:28:22.912 "name": "ftl", 00:28:22.912 "properties": [ 00:28:22.912 { 00:28:22.912 "name": "superblock_version", 00:28:22.912 "value": 5, 00:28:22.912 "read-only": true 00:28:22.912 }, 00:28:22.912 { 00:28:22.912 "name": "base_device", 00:28:22.912 "bands": [ 00:28:22.912 { 00:28:22.912 "id": 0, 00:28:22.912 "state": "FREE", 00:28:22.912 "validity": 0.0 
00:28:22.912 }, 00:28:22.912 { 00:28:22.912 "id": 1, 00:28:22.912 "state": "FREE", 00:28:22.912 "validity": 0.0 00:28:22.912 }, 00:28:22.912 { 00:28:22.912 "id": 2, 00:28:22.912 "state": "FREE", 00:28:22.912 "validity": 0.0 00:28:22.912 }, 00:28:22.912 { 00:28:22.912 "id": 3, 00:28:22.912 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 4, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 5, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 6, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 7, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 8, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 9, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 10, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 11, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 12, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 13, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 14, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 15, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 16, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 17, 00:28:22.913 "state": "FREE", 00:28:22.913 "validity": 0.0 00:28:22.913 } 00:28:22.913 ], 00:28:22.913 "read-only": true 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "name": "cache_device", 00:28:22.913 "type": "bdev", 00:28:22.913 "chunks": [ 00:28:22.913 { 00:28:22.913 "id": 0, 00:28:22.913 "state": "INACTIVE", 00:28:22.913 "utilization": 0.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 1, 00:28:22.913 "state": "CLOSED", 00:28:22.913 "utilization": 1.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 2, 00:28:22.913 "state": "CLOSED", 00:28:22.913 "utilization": 1.0 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 3, 00:28:22.913 "state": "OPEN", 00:28:22.913 "utilization": 0.001953125 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "id": 4, 00:28:22.913 "state": "OPEN", 00:28:22.913 "utilization": 0.0 00:28:22.913 } 00:28:22.913 ], 00:28:22.913 "read-only": true 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "name": "verbose_mode", 00:28:22.913 "value": true, 00:28:22.913 "unit": "", 00:28:22.913 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:22.913 }, 00:28:22.913 { 00:28:22.913 "name": "prep_upgrade_on_shutdown", 00:28:22.913 "value": false, 00:28:22.913 "unit": "", 00:28:22.913 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:22.913 } 00:28:22.913 ] 00:28:22.913 } 00:28:22.913 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:23.172 [2024-11-05 03:38:46.643640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:23.172 [2024-11-05 03:38:46.643851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:23.172 [2024-11-05 03:38:46.643977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:23.172 [2024-11-05 03:38:46.644015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.172 [2024-11-05 03:38:46.644080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.172 [2024-11-05 03:38:46.644115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:23.172 [2024-11-05 03:38:46.644146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.172 [2024-11-05 03:38:46.644175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.172 [2024-11-05 03:38:46.644284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.172 [2024-11-05 03:38:46.644338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:23.172 [2024-11-05 03:38:46.644370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.172 [2024-11-05 03:38:46.644400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.172 [2024-11-05 03:38:46.644489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.832 ms, result 0 00:28:23.172 true 00:28:23.172 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:23.172 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:23.172 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.431 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:23.432 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:23.432 03:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:23.691 [2024-11-05 03:38:47.099546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.691 [2024-11-05 03:38:47.099593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:23.691 [2024-11-05 03:38:47.099609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:23.691 [2024-11-05 03:38:47.099620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.691 [2024-11-05 03:38:47.099645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.691 [2024-11-05 03:38:47.099657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:23.691 [2024-11-05 03:38:47.099667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.691 [2024-11-05 03:38:47.099677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.691 [2024-11-05 03:38:47.099697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.691 [2024-11-05 03:38:47.099708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:23.691 [2024-11-05 03:38:47.099719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:23.691 [2024-11-05 03:38:47.099729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:23.691 [2024-11-05 03:38:47.099788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.230 ms, result 0 00:28:23.691 true 00:28:23.691 03:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.950 { 00:28:23.950 "name": "ftl", 00:28:23.950 "properties": [ 00:28:23.950 { 00:28:23.950 "name": "superblock_version", 00:28:23.950 "value": 5, 00:28:23.950 "read-only": true 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "name": "base_device", 00:28:23.950 "bands": [ 00:28:23.950 { 00:28:23.950 "id": 0, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 1, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 2, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 3, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 4, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 5, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 6, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 7, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 8, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 9, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 10, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 11, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 12, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 13, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.950 }, 00:28:23.950 { 00:28:23.950 "id": 14, 00:28:23.950 "state": "FREE", 00:28:23.950 "validity": 0.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 15, 00:28:23.951 "state": "FREE", 00:28:23.951 "validity": 0.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 16, 00:28:23.951 "state": "FREE", 00:28:23.951 "validity": 0.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 17, 00:28:23.951 "state": "FREE", 00:28:23.951 "validity": 0.0 00:28:23.951 } 00:28:23.951 ], 00:28:23.951 "read-only": true 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "name": "cache_device", 00:28:23.951 "type": "bdev", 00:28:23.951 "chunks": [ 00:28:23.951 { 00:28:23.951 "id": 0, 00:28:23.951 "state": "INACTIVE", 00:28:23.951 "utilization": 0.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 1, 00:28:23.951 "state": "CLOSED", 00:28:23.951 "utilization": 1.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 2, 00:28:23.951 "state": "CLOSED", 00:28:23.951 "utilization": 1.0 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 3, 00:28:23.951 "state": "OPEN", 00:28:23.951 "utilization": 0.001953125 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "id": 4, 00:28:23.951 "state": "OPEN", 00:28:23.951 "utilization": 0.0 00:28:23.951 } 00:28:23.951 ], 00:28:23.951 "read-only": true 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "name": "verbose_mode", 
00:28:23.951 "value": true, 00:28:23.951 "unit": "", 00:28:23.951 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:23.951 }, 00:28:23.951 { 00:28:23.951 "name": "prep_upgrade_on_shutdown", 00:28:23.951 "value": true, 00:28:23.951 "unit": "", 00:28:23.951 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:23.951 } 00:28:23.951 ] 00:28:23.951 } 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80792 ]] 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80792 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80792 ']' 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80792 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80792 00:28:23.951 killing process with pid 80792 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80792' 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80792 00:28:23.951 03:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80792 00:28:25.332 [2024-11-05 03:38:48.476268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:25.332 [2024-11-05 03:38:48.495734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.332 [2024-11-05 03:38:48.495777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:25.332 [2024-11-05 03:38:48.495794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:25.332 [2024-11-05 03:38:48.495804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.332 [2024-11-05 03:38:48.495826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:25.332 [2024-11-05 03:38:48.500022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.332 [2024-11-05 03:38:48.500051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:25.332 [2024-11-05 03:38:48.500064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.186 ms 00:28:25.332 [2024-11-05 03:38:48.500075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.478 [2024-11-05 03:38:55.682298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.478 [2024-11-05 03:38:55.682355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:33.478 [2024-11-05 03:38:55.682373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7193.851 ms 00:28:33.478 [2024-11-05 03:38:55.682389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.478 [2024-11-05 03:38:55.683535] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:33.478 [2024-11-05 03:38:55.683560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:33.478 [2024-11-05 03:38:55.683572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:28:33.479 [2024-11-05 03:38:55.683583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.684513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.684530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:33.479 [2024-11-05 03:38:55.684543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.903 ms 00:28:33.479 [2024-11-05 03:38:55.684553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.699952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.699987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:33.479 [2024-11-05 03:38:55.700000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.383 ms 00:28:33.479 [2024-11-05 03:38:55.700012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.709242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.709280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:33.479 [2024-11-05 03:38:55.709307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.209 ms 00:28:33.479 [2024-11-05 03:38:55.709318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.709397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.709422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:33.479 [2024-11-05 03:38:55.709440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:33.479 [2024-11-05 03:38:55.709450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.724332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.724473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:33.479 [2024-11-05 03:38:55.724494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.889 ms 00:28:33.479 [2024-11-05 03:38:55.724505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.739550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.739691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:33.479 [2024-11-05 03:38:55.739711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.032 ms 00:28:33.479 [2024-11-05 03:38:55.739721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.754753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.754881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:33.479 [2024-11-05 03:38:55.754900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.020 ms 00:28:33.479 [2024-11-05 03:38:55.754910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.769385] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.769509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:33.479 [2024-11-05 03:38:55.769528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.421 ms 00:28:33.479 [2024-11-05 03:38:55.769538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.769571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:33.479 [2024-11-05 03:38:55.769588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:33.479 [2024-11-05 03:38:55.769600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:33.479 [2024-11-05 03:38:55.769624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:33.479 [2024-11-05 03:38:55.769635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:33.479 [2024-11-05 03:38:55.769794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:33.479 [2024-11-05 03:38:55.769804] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a97cb0e9-943d-480c-a4e4-340d83fc0679 00:28:33.479 [2024-11-05 03:38:55.769814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:33.479 [2024-11-05 03:38:55.769823] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:33.479 [2024-11-05 03:38:55.769840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:33.479 [2024-11-05 03:38:55.769850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:33.479 [2024-11-05 03:38:55.769860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:33.479 [2024-11-05 03:38:55.769876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:33.479 [2024-11-05 03:38:55.769886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:33.479 [2024-11-05 03:38:55.769894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:33.479 [2024-11-05 03:38:55.769903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:33.479 [2024-11-05 03:38:55.769914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.769931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:33.479 [2024-11-05 03:38:55.769942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:28:33.479 [2024-11-05 03:38:55.769952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.790521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.790554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:33.479 [2024-11-05 03:38:55.790566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.572 ms 00:28:33.479 [2024-11-05 03:38:55.790583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.791084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.479 [2024-11-05 03:38:55.791096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:33.479 [2024-11-05 03:38:55.791107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:28:33.479 [2024-11-05 03:38:55.791118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.855895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:55.855930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:33.479 [2024-11-05 03:38:55.855948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:55.855959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.855990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:55.856001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:33.479 [2024-11-05 03:38:55.856011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:55.856022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.856109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:55.856123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:33.479 [2024-11-05 03:38:55.856135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:55.856145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.856167] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:55.856178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:33.479 [2024-11-05 03:38:55.856188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:55.856198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:55.981710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:55.981768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:33.479 [2024-11-05 03:38:55.981784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:55.981800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:56.085154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:56.085212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:33.479 [2024-11-05 03:38:56.085227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:56.085238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:56.085363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:56.085377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:33.479 [2024-11-05 03:38:56.085389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:56.085400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.479 [2024-11-05 03:38:56.085467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.479 [2024-11-05 03:38:56.085480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:33.479 [2024-11-05 03:38:56.085491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.479 [2024-11-05 03:38:56.085501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.480 [2024-11-05 03:38:56.085607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.480 [2024-11-05 03:38:56.085620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:33.480 [2024-11-05 03:38:56.085637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.480 [2024-11-05 03:38:56.085648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.480 [2024-11-05 03:38:56.085687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.480 [2024-11-05 03:38:56.085703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:33.480 [2024-11-05 03:38:56.085714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.480 [2024-11-05 03:38:56.085724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.480 [2024-11-05 03:38:56.085763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.480 [2024-11-05 03:38:56.085775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:33.480 [2024-11-05 03:38:56.085785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.480 [2024-11-05 03:38:56.085795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.480 
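The statistics block above is internally consistent, assuming the counters are in 4 KiB FTL blocks: 524288 user-written blocks is exactly the 2 GiB produced by the two fill passes, and the logged WAF is simply total writes over user writes:

    echo $(( 524288 * 4096 / 1048576 ))     # -> 2048 MiB, i.e. 2 x 1024 MiB fills
    echo "scale=4; 786752 / 524288" | bc    # -> 1.5006, the logged WAF

The remaining ~262k blocks are FTL's own metadata (and any relocation) writes accumulated during the run.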
[2024-11-05 03:38:56.085840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.480 [2024-11-05 03:38:56.085859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:33.480 [2024-11-05 03:38:56.085870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.480 [2024-11-05 03:38:56.085880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.480 [2024-11-05 03:38:56.086020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7602.574 ms, result 0 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81359 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81359 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81359 ']' 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.770 03:39:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:37.029 [2024-11-05 03:39:00.382603] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
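tcp_target_setup now simulates the post-upgrade boot: a fresh spdk_tgt is started from the JSON snapshot taken by save_config before shutdown, so the whole bdev and NVMe-oF stack comes back without re-issuing any RPCs. Roughly, with the save_config redirect inferred from the -f check at common.sh@84 rather than visible in xtrace:

    rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json   # before shutdown
    # ... target killed, FTL persisted with prep_upgrade_on_shutdown=true ...
    spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    waitforlisten $!    # blocks until /var/tmp/spdk.sock answers

Since prep_upgrade_on_shutdown was set, this restart is what drives FTL through the layout-upgrade startup path traced below.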
00:28:37.029 [2024-11-05 03:39:00.382737] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81359 ] 00:28:37.029 [2024-11-05 03:39:00.565670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.287 [2024-11-05 03:39:00.680961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.226 [2024-11-05 03:39:01.624408] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:38.226 [2024-11-05 03:39:01.624481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:38.226 [2024-11-05 03:39:01.770902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.226 [2024-11-05 03:39:01.771083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:38.226 [2024-11-05 03:39:01.771126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:38.226 [2024-11-05 03:39:01.771138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.771208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.771221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:38.227 [2024-11-05 03:39:01.771233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:38.227 [2024-11-05 03:39:01.771243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.771275] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:38.227 [2024-11-05 03:39:01.772387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:38.227 [2024-11-05 03:39:01.772415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.772427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:38.227 [2024-11-05 03:39:01.772438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:28:38.227 [2024-11-05 03:39:01.772448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.774012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:38.227 [2024-11-05 03:39:01.793418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.793457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:38.227 [2024-11-05 03:39:01.793477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.438 ms 00:28:38.227 [2024-11-05 03:39:01.793488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.793548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.793561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:38.227 [2024-11-05 03:39:01.793572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:38.227 [2024-11-05 03:39:01.793582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.800343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 
03:39:01.800499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:38.227 [2024-11-05 03:39:01.800521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.690 ms 00:28:38.227 [2024-11-05 03:39:01.800532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.800604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.800617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:38.227 [2024-11-05 03:39:01.800628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:38.227 [2024-11-05 03:39:01.800638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.800682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.800695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:38.227 [2024-11-05 03:39:01.800710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:38.227 [2024-11-05 03:39:01.800719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.800746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:38.227 [2024-11-05 03:39:01.805577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.805611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:38.227 [2024-11-05 03:39:01.805623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.845 ms 00:28:38.227 [2024-11-05 03:39:01.805637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.805665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.805676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:38.227 [2024-11-05 03:39:01.805687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:38.227 [2024-11-05 03:39:01.805698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.805753] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:38.227 [2024-11-05 03:39:01.805777] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:38.227 [2024-11-05 03:39:01.805816] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:38.227 [2024-11-05 03:39:01.805833] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:38.227 [2024-11-05 03:39:01.805921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:38.227 [2024-11-05 03:39:01.805934] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:38.227 [2024-11-05 03:39:01.805947] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:38.227 [2024-11-05 03:39:01.805960] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:38.227 [2024-11-05 03:39:01.805972] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:38.227 [2024-11-05 03:39:01.805987] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:38.227 [2024-11-05 03:39:01.805997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:38.227 [2024-11-05 03:39:01.806007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:38.227 [2024-11-05 03:39:01.806017] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:38.227 [2024-11-05 03:39:01.806027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.806037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:38.227 [2024-11-05 03:39:01.806047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:28:38.227 [2024-11-05 03:39:01.806056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.806130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.227 [2024-11-05 03:39:01.806141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:38.227 [2024-11-05 03:39:01.806151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:38.227 [2024-11-05 03:39:01.806164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.227 [2024-11-05 03:39:01.806253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:38.227 [2024-11-05 03:39:01.806266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:38.227 [2024-11-05 03:39:01.806276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:38.227 [2024-11-05 03:39:01.806327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:38.227 [2024-11-05 03:39:01.806346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:38.227 [2024-11-05 03:39:01.806357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:38.227 [2024-11-05 03:39:01.806366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:38.227 [2024-11-05 03:39:01.806389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:38.227 [2024-11-05 03:39:01.806398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:38.227 [2024-11-05 03:39:01.806417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:38.227 [2024-11-05 03:39:01.806426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:38.227 [2024-11-05 03:39:01.806445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:38.227 [2024-11-05 03:39:01.806453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806463] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:38.227 [2024-11-05 03:39:01.806472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:38.227 [2024-11-05 03:39:01.806481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:38.227 [2024-11-05 03:39:01.806499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:38.227 [2024-11-05 03:39:01.806516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:38.227 [2024-11-05 03:39:01.806546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:38.227 [2024-11-05 03:39:01.806555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:38.227 [2024-11-05 03:39:01.806574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:38.227 [2024-11-05 03:39:01.806582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:38.227 [2024-11-05 03:39:01.806601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:38.227 [2024-11-05 03:39:01.806610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:38.227 [2024-11-05 03:39:01.806628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:38.227 [2024-11-05 03:39:01.806637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.227 [2024-11-05 03:39:01.806647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:38.227 [2024-11-05 03:39:01.806656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:38.228 [2024-11-05 03:39:01.806665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.228 [2024-11-05 03:39:01.806674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:38.228 [2024-11-05 03:39:01.806683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:38.228 [2024-11-05 03:39:01.806694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.228 [2024-11-05 03:39:01.806703] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:38.228 [2024-11-05 03:39:01.806721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:38.228 [2024-11-05 03:39:01.806731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:38.228 [2024-11-05 03:39:01.806743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:38.228 [2024-11-05 03:39:01.806765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:38.228 [2024-11-05 03:39:01.806776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:38.228 [2024-11-05 03:39:01.806790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:38.228 [2024-11-05 03:39:01.806804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:38.228 [2024-11-05 03:39:01.806815] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:38.228 [2024-11-05 03:39:01.806830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:38.228 [2024-11-05 03:39:01.806845] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:38.228 [2024-11-05 03:39:01.806863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.806884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:38.228 [2024-11-05 03:39:01.806902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.806920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.806937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:38.228 [2024-11-05 03:39:01.806955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:38.228 [2024-11-05 03:39:01.806969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:38.228 [2024-11-05 03:39:01.806980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:38.228 [2024-11-05 03:39:01.806990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:38.228 [2024-11-05 03:39:01.807062] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:38.228 [2024-11-05 03:39:01.807074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:38.228 [2024-11-05 03:39:01.807095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:38.228 [2024-11-05 03:39:01.807106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:38.228 [2024-11-05 03:39:01.807118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:38.228 [2024-11-05 03:39:01.807130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.228 [2024-11-05 03:39:01.807141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:38.228 [2024-11-05 03:39:01.807152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:28:38.228 [2024-11-05 03:39:01.807161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.228 [2024-11-05 03:39:01.807228] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:38.228 [2024-11-05 03:39:01.807252] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:42.422 [2024-11-05 03:39:05.529393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.529456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:42.422 [2024-11-05 03:39:05.529473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3728.210 ms 00:28:42.422 [2024-11-05 03:39:05.529500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.566044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.566092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:42.422 [2024-11-05 03:39:05.566108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.248 ms 00:28:42.422 [2024-11-05 03:39:05.566134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.566224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.566242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:42.422 [2024-11-05 03:39:05.566253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:42.422 [2024-11-05 03:39:05.566263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.610781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.610828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:42.422 [2024-11-05 03:39:05.610843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.513 ms 00:28:42.422 [2024-11-05 03:39:05.610857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.610904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.610916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:42.422 [2024-11-05 03:39:05.610926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:42.422 [2024-11-05 03:39:05.610937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.611422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.611437] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:42.422 [2024-11-05 03:39:05.611448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.425 ms 00:28:42.422 [2024-11-05 03:39:05.611459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.611504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.611515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:42.422 [2024-11-05 03:39:05.611526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:42.422 [2024-11-05 03:39:05.611535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.631262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.631312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:42.422 [2024-11-05 03:39:05.631344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.733 ms 00:28:42.422 [2024-11-05 03:39:05.631355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.650026] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:42.422 [2024-11-05 03:39:05.650065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:42.422 [2024-11-05 03:39:05.650080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.650092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:42.422 [2024-11-05 03:39:05.650103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.630 ms 00:28:42.422 [2024-11-05 03:39:05.650112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.670687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.670731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:42.422 [2024-11-05 03:39:05.670746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.564 ms 00:28:42.422 [2024-11-05 03:39:05.670757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.688414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.688579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:42.422 [2024-11-05 03:39:05.688600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.638 ms 00:28:42.422 [2024-11-05 03:39:05.688612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.706341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.706375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:42.422 [2024-11-05 03:39:05.706387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.714 ms 00:28:42.422 [2024-11-05 03:39:05.706397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.707190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.707215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:42.422 [2024-11-05 
03:39:05.707226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:28:42.422 [2024-11-05 03:39:05.707236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.809442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.809688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:42.422 [2024-11-05 03:39:05.809731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 102.348 ms 00:28:42.422 [2024-11-05 03:39:05.809743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.820711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:42.422 [2024-11-05 03:39:05.821565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.821591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:42.422 [2024-11-05 03:39:05.821605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.754 ms 00:28:42.422 [2024-11-05 03:39:05.821616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.821723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.821739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:42.422 [2024-11-05 03:39:05.821750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:42.422 [2024-11-05 03:39:05.821760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.422 [2024-11-05 03:39:05.821837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.422 [2024-11-05 03:39:05.821852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:42.422 [2024-11-05 03:39:05.821864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:42.422 [2024-11-05 03:39:05.821874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.423 [2024-11-05 03:39:05.821898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.423 [2024-11-05 03:39:05.821909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:42.423 [2024-11-05 03:39:05.821919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:42.423 [2024-11-05 03:39:05.821933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.423 [2024-11-05 03:39:05.821970] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:42.423 [2024-11-05 03:39:05.821982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.423 [2024-11-05 03:39:05.821993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:42.423 [2024-11-05 03:39:05.822003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:42.423 [2024-11-05 03:39:05.822013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.423 [2024-11-05 03:39:05.858280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.423 [2024-11-05 03:39:05.858330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:42.423 [2024-11-05 03:39:05.858344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.304 ms 00:28:42.423 [2024-11-05 03:39:05.858356] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.423 [2024-11-05 03:39:05.858436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.423 [2024-11-05 03:39:05.858449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:28:42.423 [2024-11-05 03:39:05.858460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms
00:28:42.423 [2024-11-05 03:39:05.858470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.423 [2024-11-05 03:39:05.859576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4094.829 ms, result 0
00:28:42.423 [2024-11-05 03:39:05.874632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:42.423 [2024-11-05 03:39:05.890617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:28:42.423 [2024-11-05 03:39:05.899825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:28:42.991 03:39:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:42.991 03:39:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0
00:28:42.991 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:28:42.991 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:28:42.991 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:28:42.992 [2024-11-05 03:39:06.543176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.992 [2024-11-05 03:39:06.543381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:28:42.992 [2024-11-05 03:39:06.543409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:28:42.992 [2024-11-05 03:39:06.543427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.992 [2024-11-05 03:39:06.543470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.992 [2024-11-05 03:39:06.543482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:28:42.992 [2024-11-05 03:39:06.543492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:28:42.992 [2024-11-05 03:39:06.543502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.992 [2024-11-05 03:39:06.543523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.992 [2024-11-05 03:39:06.543534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:28:42.992 [2024-11-05 03:39:06.543545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:28:42.992 [2024-11-05 03:39:06.543555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.992 [2024-11-05 03:39:06.543622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.437 ms, result 0
00:28:42.992 true
00:28:42.992 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:43.251 {
00:28:43.251 "name": "ftl",
00:28:43.251 "properties": [
00:28:43.251 {
00:28:43.251 "name": "superblock_version",
00:28:43.251 "value": 5,
00:28:43.251 "read-only": true
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "name": "base_device",
00:28:43.251 "bands": [
00:28:43.251 {
00:28:43.251 "id": 0,
00:28:43.251 "state": "CLOSED",
00:28:43.251 "validity": 1.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 1,
00:28:43.251 "state": "CLOSED",
00:28:43.251 "validity": 1.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 2,
00:28:43.251 "state": "CLOSED",
00:28:43.251 "validity": 0.007843137254901933
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 3,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 4,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 5,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 6,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 7,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 8,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 9,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 10,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 11,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 12,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 13,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 14,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 15,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 16,
00:28:43.251 "state": "FREE",
00:28:43.251 "validity": 0.0
00:28:43.251 },
00:28:43.251 {
00:28:43.251 "id": 17,
00:28:43.252 "state": "FREE",
00:28:43.252 "validity": 0.0
00:28:43.252 }
00:28:43.252 ],
00:28:43.252 "read-only": true
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "name": "cache_device",
00:28:43.252 "type": "bdev",
00:28:43.252 "chunks": [
00:28:43.252 {
00:28:43.252 "id": 0,
00:28:43.252 "state": "INACTIVE",
00:28:43.252 "utilization": 0.0
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "id": 1,
00:28:43.252 "state": "OPEN",
00:28:43.252 "utilization": 0.0
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "id": 2,
00:28:43.252 "state": "OPEN",
00:28:43.252 "utilization": 0.0
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "id": 3,
00:28:43.252 "state": "FREE",
00:28:43.252 "utilization": 0.0
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "id": 4,
00:28:43.252 "state": "FREE",
00:28:43.252 "utilization": 0.0
00:28:43.252 }
00:28:43.252 ],
00:28:43.252 "read-only": true
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "name": "verbose_mode",
00:28:43.252 "value": true,
00:28:43.252 "unit": "",
00:28:43.252 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:28:43.252 },
00:28:43.252 {
00:28:43.252 "name": "prep_upgrade_on_shutdown",
00:28:43.252 "value": false,
00:28:43.252 "unit": "",
00:28:43.252 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:28:43.252 }
00:28:43.252 ]
00:28:43.252 }
00:28:43.252 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:28:43.252 03:39:06
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:43.252 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:43.511 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:43.511 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:43.511 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:43.511 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:43.511 03:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:43.770 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:43.771 Validate MD5 checksum, iteration 1 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:43.771 03:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.771 [2024-11-05 03:39:07.292686] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
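The two jq probes above gate the checksum pass: before reading anything back, the test asserts that no NV-cache chunk holds data (used=0) and that no band is reported OPENED (opened=0). The same checks can be replayed by hand against the live target; the filters below are copied verbatim from the xtrace lines above, wrapped in a standalone sketch:

    # Re-run the cleanliness assertions from upgrade_shutdown.sh by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    # Chunks in the NV-cache write buffer that still hold data:
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    # Bands currently open for writing:
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    (( used == 0 && opened == 0 )) || exit 1    # both must be 0 before dd starts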
00:28:43.771 [2024-11-05 03:39:07.292799] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81445 ] 00:28:44.033 [2024-11-05 03:39:07.471076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.033 [2024-11-05 03:39:07.579586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.938  [2024-11-05T03:39:09.778Z] Copying: 727/1024 [MB] (727 MBps) [2024-11-05T03:39:11.315Z] Copying: 1024/1024 [MB] (average 709 MBps) 00:28:47.731 00:28:47.731 03:39:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:47.731 03:39:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1c9f1d51ea9dbe1cdd8bb59689235ec3 00:28:49.640 Validate MD5 checksum, iteration 2 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1c9f1d51ea9dbe1cdd8bb59689235ec3 != \1\c\9\f\1\d\5\1\e\a\9\d\b\e\1\c\d\d\8\b\b\5\9\6\8\9\2\3\5\e\c\3 ]] 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:49.640 03:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.640 [2024-11-05 03:39:13.059071] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
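Iteration 1 above pulls 1 GiB from ftln1 over NVMe/TCP into a scratch file and checks its MD5; the backslash-riddled comparison in the trace is just how bash xtrace renders the quoted right-hand side of a [[ != ]] test (every character escaped so it matches literally rather than as a glob). Stripped of the escaping, the step amounts to the sketch below, with $expected standing in for the checksum the script recorded for this 1024 MiB region (the script's actual variable names may differ):

    # One validation pass: hash what was just read and compare checksums.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    sum=$(md5sum "$file" | cut -f1 -d' ')
    # Quoted RHS forces a literal match; xtrace prints it as \1\c\9\f...
    [[ $sum != "$expected" ]] && { echo "MD5 mismatch at skip=$skip" >&2; exit 1; }
    skip=$(( skip + 1024 ))    # advance 1024 MiB for the next iteration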
00:28:49.640 [2024-11-05 03:39:13.059365] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81512 ] 00:28:49.899 [2024-11-05 03:39:13.241664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.899 [2024-11-05 03:39:13.352076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.805  [2024-11-05T03:39:15.647Z] Copying: 700/1024 [MB] (700 MBps) [2024-11-05T03:39:17.025Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:28:53.441 00:28:53.441 03:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:53.441 03:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442965615c582959c96eea1128c97096 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442965615c582959c96eea1128c97096 != \4\4\2\9\6\5\6\1\5\c\5\8\2\9\5\9\c\9\6\e\e\a\1\1\2\8\c\9\7\0\9\6 ]] 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81359 ]] 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81359 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81571 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:55.347 03:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81571 00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81571 ']' 00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
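With both MD5 iterations matching, tcp_target_shutdown_dirty kills pid 81359 outright: SIGKILL gives FTL no chance to run its 'FTL shutdown' management sequence, so the dirty state recorded at startup ('Set FTL dirty state' above) is what the next boot will find, forcing recovery from the NV cache and P2L checkpoints. A sketch of the kill-and-relaunch step, using the binary, cpumask, and config path from the trace (the real helpers are tcp_target_shutdown_dirty and tcp_target_setup in ftl/common.sh):

    # Simulate a crash: no SIGTERM, no clean FTL shutdown.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Relaunch from the same saved config; FTL will come up dirty.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"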
00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.348 03:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:55.348 [2024-11-05 03:39:18.618681] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:28:55.348 [2024-11-05 03:39:18.618964] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81571 ] 00:28:55.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81359 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:55.348 [2024-11-05 03:39:18.801757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.348 [2024-11-05 03:39:18.916399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.284 [2024-11-05 03:39:19.855237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:56.284 [2024-11-05 03:39:19.855322] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:56.543 [2024-11-05 03:39:20.002043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.543 [2024-11-05 03:39:20.002256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:56.543 [2024-11-05 03:39:20.002281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:56.543 [2024-11-05 03:39:20.002309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.543 [2024-11-05 03:39:20.002375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.543 [2024-11-05 03:39:20.002389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:56.543 [2024-11-05 03:39:20.002399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:56.543 [2024-11-05 03:39:20.002410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.002440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:56.544 [2024-11-05 03:39:20.003553] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:56.544 [2024-11-05 03:39:20.003578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.003589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:56.544 [2024-11-05 03:39:20.003602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:28:56.544 [2024-11-05 03:39:20.003612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.003984] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:56.544 [2024-11-05 03:39:20.027404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.027450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:56.544 [2024-11-05 03:39:20.027466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.455 ms 00:28:56.544 [2024-11-05 03:39:20.027478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.041995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:56.544 [2024-11-05 03:39:20.042054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:56.544 [2024-11-05 03:39:20.042073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:56.544 [2024-11-05 03:39:20.042084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.042602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.042618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:56.544 [2024-11-05 03:39:20.042630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:28:56.544 [2024-11-05 03:39:20.042640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.042700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.042726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:56.544 [2024-11-05 03:39:20.042738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:56.544 [2024-11-05 03:39:20.042748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.042779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.042790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:56.544 [2024-11-05 03:39:20.042801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:56.544 [2024-11-05 03:39:20.042811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.042837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:56.544 [2024-11-05 03:39:20.047223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.047253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:56.544 [2024-11-05 03:39:20.047266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.399 ms 00:28:56.544 [2024-11-05 03:39:20.047277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.047317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.047328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:56.544 [2024-11-05 03:39:20.047339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:56.544 [2024-11-05 03:39:20.047349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.047391] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:56.544 [2024-11-05 03:39:20.047414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:56.544 [2024-11-05 03:39:20.047450] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:56.544 [2024-11-05 03:39:20.047470] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:56.544 [2024-11-05 03:39:20.047558] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:56.544 [2024-11-05 03:39:20.047571] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:56.544 [2024-11-05 03:39:20.047584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:56.544 [2024-11-05 03:39:20.047597] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:56.544 [2024-11-05 03:39:20.047609] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:56.544 [2024-11-05 03:39:20.047620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:56.544 [2024-11-05 03:39:20.047630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:56.544 [2024-11-05 03:39:20.047640] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:56.544 [2024-11-05 03:39:20.047650] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:56.544 [2024-11-05 03:39:20.047660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.047674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:56.544 [2024-11-05 03:39:20.047684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:28:56.544 [2024-11-05 03:39:20.047695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.047768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.544 [2024-11-05 03:39:20.047779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:56.544 [2024-11-05 03:39:20.047789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:56.544 [2024-11-05 03:39:20.047798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.544 [2024-11-05 03:39:20.047889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:56.544 [2024-11-05 03:39:20.047901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:56.544 [2024-11-05 03:39:20.047915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:56.544 [2024-11-05 03:39:20.047925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.047939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:56.544 [2024-11-05 03:39:20.047948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.047958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:56.544 [2024-11-05 03:39:20.047968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:56.544 [2024-11-05 03:39:20.047978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:56.544 [2024-11-05 03:39:20.047987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.047997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:56.544 [2024-11-05 03:39:20.048007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:56.544 [2024-11-05 03:39:20.048016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:56.544 [2024-11-05 03:39:20.048035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
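Throughout these dumps the dump_region lines report MiB while the superblock layout lines report offsets and sizes in hex 4 KiB blocks, and the two agree. For example the l2p region (type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 in the superblock dumps) is 3712 blocks, which is exactly the 'offset: 0.12 MiB, blocks: 14.50 MiB' printed for Region l2p. A quick cross-check, assuming the 4 KiB block size these figures imply:

    # Convert a superblock region (hex 4 KiB blocks) into the MiB form
    # that dump_region prints. The values are Region type:0x2 (l2p).
    blk_offs=$((0x20)); blk_sz=$((0xe80))
    awk -v o="$blk_offs" -v s="$blk_sz" \
        'BEGIN { printf "offset: %.2f MiB, blocks: %.2f MiB\n", o*4096/2^20, s*4096/2^20 }'
    # -> offset: 0.12 MiB, blocks: 14.50 MiB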
00:28:56.544 [2024-11-05 03:39:20.048044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:56.544 [2024-11-05 03:39:20.048063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:56.544 [2024-11-05 03:39:20.048071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:56.544 [2024-11-05 03:39:20.048090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:56.544 [2024-11-05 03:39:20.048099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:56.544 [2024-11-05 03:39:20.048129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:56.544 [2024-11-05 03:39:20.048138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:56.544 [2024-11-05 03:39:20.048157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:56.544 [2024-11-05 03:39:20.048167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:56.544 [2024-11-05 03:39:20.048186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:56.544 [2024-11-05 03:39:20.048196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:56.544 [2024-11-05 03:39:20.048214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:56.544 [2024-11-05 03:39:20.048223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:56.544 [2024-11-05 03:39:20.048242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:56.544 [2024-11-05 03:39:20.048271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:56.544 [2024-11-05 03:39:20.048318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:56.544 [2024-11-05 03:39:20.048328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048338] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:56.544 [2024-11-05 03:39:20.048349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:56.544 [2024-11-05 03:39:20.048359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:56.544 [2024-11-05 03:39:20.048369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:56.544 [2024-11-05 03:39:20.048379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:56.544 [2024-11-05 03:39:20.048389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:56.545 [2024-11-05 03:39:20.048398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:56.545 [2024-11-05 03:39:20.048407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:56.545 [2024-11-05 03:39:20.048417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:56.545 [2024-11-05 03:39:20.048426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:56.545 [2024-11-05 03:39:20.048438] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:56.545 [2024-11-05 03:39:20.048450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:56.545 [2024-11-05 03:39:20.048472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:56.545 [2024-11-05 03:39:20.048511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:56.545 [2024-11-05 03:39:20.048521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:56.545 [2024-11-05 03:39:20.048532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:56.545 [2024-11-05 03:39:20.048542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:56.545 [2024-11-05 03:39:20.048615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:56.545 [2024-11-05 03:39:20.048626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:56.545 [2024-11-05 03:39:20.048648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:56.545 [2024-11-05 03:39:20.048659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:56.545 [2024-11-05 03:39:20.048669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:56.545 [2024-11-05 03:39:20.048681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.545 [2024-11-05 03:39:20.048694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:56.545 [2024-11-05 03:39:20.048705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:28:56.545 [2024-11-05 03:39:20.048714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.545 [2024-11-05 03:39:20.086305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.545 [2024-11-05 03:39:20.086351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:56.545 [2024-11-05 03:39:20.086367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.595 ms 00:28:56.545 [2024-11-05 03:39:20.086378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.545 [2024-11-05 03:39:20.086434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.545 [2024-11-05 03:39:20.086447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:56.545 [2024-11-05 03:39:20.086457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:56.545 [2024-11-05 03:39:20.086468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.133461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.133667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:56.805 [2024-11-05 03:39:20.133691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.993 ms 00:28:56.805 [2024-11-05 03:39:20.133702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.133759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.133771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:56.805 [2024-11-05 03:39:20.133783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:56.805 [2024-11-05 03:39:20.133793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.133948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.133963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:56.805 [2024-11-05 03:39:20.133974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:56.805 [2024-11-05 03:39:20.133985] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.134026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.134037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:56.805 [2024-11-05 03:39:20.134048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:56.805 [2024-11-05 03:39:20.134058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.154582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.154619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:56.805 [2024-11-05 03:39:20.154633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.531 ms 00:28:56.805 [2024-11-05 03:39:20.154644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.154804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.154820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:56.805 [2024-11-05 03:39:20.154831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:56.805 [2024-11-05 03:39:20.154842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.189137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.189175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:56.805 [2024-11-05 03:39:20.189190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.327 ms 00:28:56.805 [2024-11-05 03:39:20.189201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.203983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.204023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:56.805 [2024-11-05 03:39:20.204047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:28:56.805 [2024-11-05 03:39:20.204058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.289456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.289521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:56.805 [2024-11-05 03:39:20.289558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.467 ms 00:28:56.805 [2024-11-05 03:39:20.289586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.289777] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:56.805 [2024-11-05 03:39:20.289919] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:56.805 [2024-11-05 03:39:20.290056] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:56.805 [2024-11-05 03:39:20.290191] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:56.805 [2024-11-05 03:39:20.290205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.290216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:56.805 [2024-11-05 
03:39:20.290228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:28:56.805 [2024-11-05 03:39:20.290239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.290373] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:56.805 [2024-11-05 03:39:20.290401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.290416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:56.805 [2024-11-05 03:39:20.290428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:56.805 [2024-11-05 03:39:20.290438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.313002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.313189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:56.805 [2024-11-05 03:39:20.313213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.575 ms 00:28:56.805 [2024-11-05 03:39:20.313225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.326854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.326987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:56.805 [2024-11-05 03:39:20.327008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:56.805 [2024-11-05 03:39:20.327019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.805 [2024-11-05 03:39:20.327127] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:56.805 [2024-11-05 03:39:20.327342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.805 [2024-11-05 03:39:20.327359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:56.805 [2024-11-05 03:39:20.327371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:28:56.805 [2024-11-05 03:39:20.327381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.375 [2024-11-05 03:39:20.945272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.375 [2024-11-05 03:39:20.945355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:57.375 [2024-11-05 03:39:20.945375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 617.662 ms 00:28:57.375 [2024-11-05 03:39:20.945386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.375 [2024-11-05 03:39:20.951333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.375 [2024-11-05 03:39:20.951480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:57.375 [2024-11-05 03:39:20.951565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.366 ms 00:28:57.375 [2024-11-05 03:39:20.951603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.375 [2024-11-05 03:39:20.952191] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:57.375 [2024-11-05 03:39:20.952278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.375 [2024-11-05 03:39:20.952394] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:57.375 [2024-11-05 03:39:20.952514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:28:57.375 [2024-11-05 03:39:20.952551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.375 [2024-11-05 03:39:20.952615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.375 [2024-11-05 03:39:20.952653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:57.375 [2024-11-05 03:39:20.952685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.375 [2024-11-05 03:39:20.952904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.375 [2024-11-05 03:39:20.952985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 626.871 ms, result 0 00:28:57.375 [2024-11-05 03:39:20.953066] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:57.375 [2024-11-05 03:39:20.953225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.375 [2024-11-05 03:39:20.953236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:57.375 [2024-11-05 03:39:20.953246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:28:57.375 [2024-11-05 03:39:20.953256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.315 [2024-11-05 03:39:21.557817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.315 [2024-11-05 03:39:21.558047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:58.315 [2024-11-05 03:39:21.558077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 604.391 ms 00:28:58.315 [2024-11-05 03:39:21.558089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.315 [2024-11-05 03:39:21.563963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.315 [2024-11-05 03:39:21.564005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:58.315 [2024-11-05 03:39:21.564020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:28:58.315 [2024-11-05 03:39:21.564031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.315 [2024-11-05 03:39:21.564552] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:58.315 [2024-11-05 03:39:21.564577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.315 [2024-11-05 03:39:21.564588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:58.315 [2024-11-05 03:39:21.564599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:28:58.315 [2024-11-05 03:39:21.564609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.315 [2024-11-05 03:39:21.564641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.315 [2024-11-05 03:39:21.564653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:58.315 [2024-11-05 03:39:21.564663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:58.315 [2024-11-05 03:39:21.564672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.315 [2024-11-05 
03:39:21.564711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 612.633 ms, result 0 00:28:58.315 [2024-11-05 03:39:21.564756] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:58.315 [2024-11-05 03:39:21.564769] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:58.315 [2024-11-05 03:39:21.564782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.315 [2024-11-05 03:39:21.564792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:58.315 [2024-11-05 03:39:21.564803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1239.686 ms 00:28:58.315 [2024-11-05 03:39:21.564814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.564844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.564856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:58.316 [2024-11-05 03:39:21.564872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:58.316 [2024-11-05 03:39:21.564882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.576095] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:58.316 [2024-11-05 03:39:21.576379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.576429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:58.316 [2024-11-05 03:39:21.576517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.497 ms 00:28:58.316 [2024-11-05 03:39:21.576553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.577215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.577357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:58.316 [2024-11-05 03:39:21.577455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:28:58.316 [2024-11-05 03:39:21.577492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.579604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.579735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:58.316 [2024-11-05 03:39:21.579813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.022 ms 00:28:58.316 [2024-11-05 03:39:21.579849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.579925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.579960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:58.316 [2024-11-05 03:39:21.579991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:58.316 [2024-11-05 03:39:21.580082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.580216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.580253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:58.316 
[2024-11-05 03:39:21.580402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:58.316 [2024-11-05 03:39:21.580515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.580570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.580602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:58.316 [2024-11-05 03:39:21.580633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:58.316 [2024-11-05 03:39:21.580662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.580777] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:58.316 [2024-11-05 03:39:21.580820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.580850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:58.316 [2024-11-05 03:39:21.580863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:58.316 [2024-11-05 03:39:21.580874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.580946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.316 [2024-11-05 03:39:21.580959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:58.316 [2024-11-05 03:39:21.580970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:58.316 [2024-11-05 03:39:21.580981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.316 [2024-11-05 03:39:21.581907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1582.000 ms, result 0 00:28:58.316 [2024-11-05 03:39:21.596997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.316 [2024-11-05 03:39:21.612982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:58.316 [2024-11-05 03:39:21.622313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:58.316 Validate MD5 checksum, iteration 1 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:58.316 03:39:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:58.316 03:39:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.316 [2024-11-05 03:39:21.759153] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 00:28:58.316 [2024-11-05 03:39:21.759434] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81612 ] 00:28:58.576 [2024-11-05 03:39:21.940942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.576 [2024-11-05 03:39:22.049500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.483  [2024-11-05T03:39:24.327Z] Copying: 703/1024 [MB] (703 MBps) [2024-11-05T03:39:28.524Z] Copying: 1024/1024 [MB] (average 695 MBps) 00:29:04.940 00:29:04.940 03:39:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:04.940 03:39:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:06.318 Validate MD5 checksum, iteration 2 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1c9f1d51ea9dbe1cdd8bb59689235ec3 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1c9f1d51ea9dbe1cdd8bb59689235ec3 != \1\c\9\f\1\d\5\1\e\a\9\d\b\e\1\c\d\d\8\b\b\5\9\6\8\9\2\3\5\e\c\3 ]] 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:06.318 03:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:06.318 [2024-11-05 03:39:29.820411] Starting SPDK v25.01-pre git sha1 
a46541aa1 / DPDK 24.03.0 initialization... 00:29:06.318 [2024-11-05 03:39:29.820700] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81698 ] 00:29:06.577 [2024-11-05 03:39:30.003348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.577 [2024-11-05 03:39:30.121238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.480  [2024-11-05T03:39:32.323Z] Copying: 657/1024 [MB] (657 MBps) [2024-11-05T03:39:34.895Z] Copying: 1024/1024 [MB] (average 666 MBps) 00:29:11.311 00:29:11.311 03:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:11.311 03:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442965615c582959c96eea1128c97096 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442965615c582959c96eea1128c97096 != \4\4\2\9\6\5\6\1\5\c\5\8\2\9\5\9\c\9\6\e\e\a\1\1\2\8\c\9\7\0\9\6 ]] 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:12.689 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.947 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81571 ]] 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81571 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81571 ']' 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81571 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81571 00:29:12.948 killing process with pid 81571 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81571' 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81571 00:29:12.948 03:39:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81571 00:29:13.885 [2024-11-05 03:39:37.447740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:13.885 [2024-11-05 03:39:37.467769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.885 [2024-11-05 03:39:37.467823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:13.885 [2024-11-05 03:39:37.467841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:13.885 [2024-11-05 03:39:37.467852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.885 [2024-11-05 03:39:37.467877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:14.145 [2024-11-05 03:39:37.471966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.471996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:14.145 [2024-11-05 03:39:37.472010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.079 ms 00:29:14.145 [2024-11-05 03:39:37.472026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.472243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.472257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:14.145 [2024-11-05 03:39:37.472268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:29:14.145 [2024-11-05 03:39:37.472278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.473480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.473515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:14.145 [2024-11-05 03:39:37.473528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:29:14.145 [2024-11-05 03:39:37.473539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.474499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.474682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:14.145 [2024-11-05 03:39:37.474704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.919 ms 00:29:14.145 [2024-11-05 03:39:37.474723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.490447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.490500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:14.145 [2024-11-05 03:39:37.490515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.680 ms 00:29:14.145 [2024-11-05 03:39:37.490533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.498774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.498815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:14.145 [2024-11-05 03:39:37.498830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.214 ms 00:29:14.145 [2024-11-05 03:39:37.498840] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.498946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.498960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:14.145 [2024-11-05 03:39:37.498972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:14.145 [2024-11-05 03:39:37.498982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.514040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.514079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:14.145 [2024-11-05 03:39:37.514094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.056 ms 00:29:14.145 [2024-11-05 03:39:37.514103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.528842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.528991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:14.145 [2024-11-05 03:39:37.529029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.724 ms 00:29:14.145 [2024-11-05 03:39:37.529039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.145 [2024-11-05 03:39:37.543418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.145 [2024-11-05 03:39:37.543562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:14.145 [2024-11-05 03:39:37.543584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.362 ms 00:29:14.145 [2024-11-05 03:39:37.543595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.558039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.146 [2024-11-05 03:39:37.558078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:14.146 [2024-11-05 03:39:37.558091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.336 ms 00:29:14.146 [2024-11-05 03:39:37.558102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.558138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:14.146 [2024-11-05 03:39:37.558156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:14.146 [2024-11-05 03:39:37.558168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:14.146 [2024-11-05 03:39:37.558180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:14.146 [2024-11-05 03:39:37.558192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 
[2024-11-05 03:39:37.558248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:14.146 [2024-11-05 03:39:37.558374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:14.146 [2024-11-05 03:39:37.558384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a97cb0e9-943d-480c-a4e4-340d83fc0679 00:29:14.146 [2024-11-05 03:39:37.558396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:14.146 [2024-11-05 03:39:37.558406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:14.146 [2024-11-05 03:39:37.558416] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:14.146 [2024-11-05 03:39:37.558446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:14.146 [2024-11-05 03:39:37.558456] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:14.146 [2024-11-05 03:39:37.558467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:14.146 [2024-11-05 03:39:37.558477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:14.146 [2024-11-05 03:39:37.558487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:14.146 [2024-11-05 03:39:37.558496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:14.146 [2024-11-05 03:39:37.558510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.146 [2024-11-05 03:39:37.558527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:14.146 [2024-11-05 03:39:37.558539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:29:14.146 [2024-11-05 03:39:37.558549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.578985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.146 [2024-11-05 03:39:37.579028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:14.146 [2024-11-05 03:39:37.579042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.435 ms 00:29:14.146 [2024-11-05 03:39:37.579054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
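
A quick consistency check on the dumps above: the hex block counts in the superblock metadata layout and the MiB figures in the region dump agree if one FTL block is 4 KiB (0x20 blocks = 0.12 MiB for the superblock region), and the per-band valid counts sum to the "total valid LBAs" in the stats dump. The arithmetic as plain shell; the 4 KiB block size is inferred from the figures, not stated in the log:

    # l2p region: blk_sz 0xe80 = 3712 blocks of 4 KiB = 14848 KiB = 14.50 MiB,
    # matching "Region l2p ... blocks: 14.50 MiB" in the NV cache layout dump.
    echo $((0xe80 * 4))                # 14848 KiB
    # That region must hold the L2P table: 3774873 entries * 4-byte addresses.
    echo $((3774873 * 4))              # 15099492 bytes, fits in 3712 * 4096 = 15204352
    # Bands validity: the valid blocks of bands 1-3 (261120 + 261120 + 2048)
    # equal the "total valid LBAs: 524288" reported just above.
    echo $((261120 + 261120 + 2048))   # 524288
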
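
For reference, the checksum pass traced earlier in this run (upgrade_shutdown.sh@96-105, "Validate MD5 checksum, iteration 1/2") is a read-back-and-compare loop. The sketch below is reconstructed from that xtrace, not quoted from upgrade_shutdown.sh: tcp_dd is the test helper seen in the trace (it drives spdk_dd against the ftln1 bdev over NVMe/TCP), while the iterations count and the checksums array are assumptions standing in for state the script set up before this excerpt.

    # Sketch of test_validate_checksum as implied by the xtrace above.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read the next 1024 MiB window of the FTL bdev back into a file.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            # Hash the window and compare with the sum recorded before shutdown.
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            [[ $sum == "${checksums[i]}" ]] || return 1
        done
    }

In this run both windows matched (1c9f1d51... and 44296561...), confirming the data survived the shutdown and upgrade cycle.
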
00:29:14.146 [2024-11-05 03:39:37.579582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.146 [2024-11-05 03:39:37.579595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:14.146 [2024-11-05 03:39:37.579605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:29:14.146 [2024-11-05 03:39:37.579616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.647166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.146 [2024-11-05 03:39:37.647235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:14.146 [2024-11-05 03:39:37.647260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.146 [2024-11-05 03:39:37.647271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.647369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.146 [2024-11-05 03:39:37.647381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:14.146 [2024-11-05 03:39:37.647392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.146 [2024-11-05 03:39:37.647402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.647526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.146 [2024-11-05 03:39:37.647540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:14.146 [2024-11-05 03:39:37.647550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.146 [2024-11-05 03:39:37.647560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.146 [2024-11-05 03:39:37.647579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.146 [2024-11-05 03:39:37.647594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:14.146 [2024-11-05 03:39:37.647605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.146 [2024-11-05 03:39:37.647615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.772206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.772270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:14.406 [2024-11-05 03:39:37.772296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.772307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.871517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.871584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:14.406 [2024-11-05 03:39:37.871599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.871627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.871740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.871752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:14.406 [2024-11-05 03:39:37.871763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.871773] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.871819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.871831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:14.406 [2024-11-05 03:39:37.871846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.871867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.871987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.872001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:14.406 [2024-11-05 03:39:37.872011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.872022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.872061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.872073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:14.406 [2024-11-05 03:39:37.872084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.872098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.872135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.872147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:14.406 [2024-11-05 03:39:37.872158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.872167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.872209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:14.406 [2024-11-05 03:39:37.872221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:14.406 [2024-11-05 03:39:37.872235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:14.406 [2024-11-05 03:39:37.872244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.406 [2024-11-05 03:39:37.872401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 405.257 ms, result 0 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:15.789 Remove shared memory files 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:15.789 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:15.790 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:15.790 03:39:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81359 00:29:15.790 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:15.790 03:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:15.790 ************************************ 00:29:15.790 END TEST ftl_upgrade_shutdown 00:29:15.790 ************************************ 00:29:15.790 00:29:15.790 real 1m29.977s 00:29:15.790 user 2m3.145s 00:29:15.790 sys 0m21.974s 00:29:15.790 03:39:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:15.790 03:39:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@14 -- # killprocess 74053 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@952 -- # '[' -z 74053 ']' 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@956 -- # kill -0 74053 00:29:15.790 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74053) - No such process 00:29:15.790 Process with pid 74053 is not found 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 74053 is not found' 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81830 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:15.790 03:39:39 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81830 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@833 -- # '[' -z 81830 ']' 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:15.790 03:39:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:15.790 [2024-11-05 03:39:39.326072] Starting SPDK v25.01-pre git sha1 a46541aa1 / DPDK 24.03.0 initialization... 
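
The two killprocess calls traced in this excerpt show both branches of the helper: pid 81571 was probed, identified, killed, and waited on (autotest_common.sh@952-@976 earlier), while pid 74053 was already gone, so kill -0 failed and the helper only reported it (@979 just above). The following is a sketch reconstructed from those traces, not the verbatim script; the sudo branch is only hinted at by the @962 comparison and is stubbed out here.

    # Sketch of killprocess as implied by the autotest_common.sh xtrace.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid"; then   # probe only; signal 0 delivers nothing
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :   # traced @962; the real helper presumably retargets the child here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
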
00:29:15.790 [2024-11-05 03:39:39.326359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81830 ] 00:29:16.049 [2024-11-05 03:39:39.503880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.049 [2024-11-05 03:39:39.609281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.997 03:39:40 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.997 03:39:40 ftl -- common/autotest_common.sh@866 -- # return 0 00:29:16.997 03:39:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:17.257 nvme0n1 00:29:17.257 03:39:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:17.257 03:39:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:17.257 03:39:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:17.516 03:39:40 ftl -- ftl/common.sh@28 -- # stores=3fbb344d-2509-4c17-be3d-83a11cb378f6 00:29:17.516 03:39:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:17.516 03:39:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fbb344d-2509-4c17-be3d-83a11cb378f6 00:29:17.775 03:39:41 ftl -- ftl/ftl.sh@23 -- # killprocess 81830 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@952 -- # '[' -z 81830 ']' 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@956 -- # kill -0 81830 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@957 -- # uname 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81830 00:29:17.775 killing process with pid 81830 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81830' 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@971 -- # kill 81830 00:29:17.775 03:39:41 ftl -- common/autotest_common.sh@976 -- # wait 81830 00:29:20.309 03:39:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:20.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:20.567 Waiting for block devices as requested 00:29:20.567 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:20.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:20.826 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:20.826 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:26.100 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:26.100 03:39:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:26.100 Remove shared memory files 00:29:26.100 03:39:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:26.100 03:39:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:26.100 03:39:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:26.100 03:39:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:26.100 03:39:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:26.100 03:39:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:26.100 
************************************ 00:29:26.100 END TEST ftl 00:29:26.100 ************************************ 00:29:26.100 00:29:26.100 real 11m31.035s 00:29:26.100 user 14m6.280s 00:29:26.100 sys 1m31.404s 00:29:26.100 03:39:49 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:26.100 03:39:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:26.100 03:39:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:26.100 03:39:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:26.100 03:39:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:26.100 03:39:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:26.100 03:39:49 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:26.100 03:39:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:26.100 03:39:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:26.100 03:39:49 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:26.100 03:39:49 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:26.100 03:39:49 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:26.100 03:39:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.100 03:39:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.100 03:39:49 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:26.100 03:39:49 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:29:26.100 03:39:49 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:29:26.100 03:39:49 -- common/autotest_common.sh@10 -- # set +x 00:29:28.636 INFO: APP EXITING 00:29:28.636 INFO: killing all VMs 00:29:28.636 INFO: killing vhost app 00:29:28.636 INFO: EXIT DONE 00:29:28.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:29.462 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:29.463 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:29.463 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:29.463 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:30.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:30.319 Cleaning 00:29:30.319 Removing: /var/run/dpdk/spdk0/config 00:29:30.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:30.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:30.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:30.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:30.319 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:30.319 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:30.319 Removing: /var/run/dpdk/spdk0 00:29:30.319 Removing: /var/run/dpdk/spdk_pid57553 00:29:30.319 Removing: /var/run/dpdk/spdk_pid57806 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58046 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58150 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58206 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58345 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58370 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58584 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58702 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58815 00:29:30.319 Removing: /var/run/dpdk/spdk_pid58942 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59056 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59095 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59132 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59208 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59336 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59785 00:29:30.319 Removing: /var/run/dpdk/spdk_pid59867 
00:29:30.319 Removing: /var/run/dpdk/spdk_pid59950
00:29:30.319 Removing: /var/run/dpdk/spdk_pid59972
00:29:30.319 Removing: /var/run/dpdk/spdk_pid60132
00:29:30.319 Removing: /var/run/dpdk/spdk_pid60148
00:29:30.319 Removing: /var/run/dpdk/spdk_pid60313
00:29:30.319 Removing: /var/run/dpdk/spdk_pid60334
00:29:30.319 Removing: /var/run/dpdk/spdk_pid60404
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60427
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60497
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60519
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60721
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60752
00:29:30.597 Removing: /var/run/dpdk/spdk_pid60841
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61035
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61130
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61178
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61632
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61736
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61856
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61909
00:29:30.597 Removing: /var/run/dpdk/spdk_pid61940
00:29:30.597 Removing: /var/run/dpdk/spdk_pid62024
00:29:30.597 Removing: /var/run/dpdk/spdk_pid62669
00:29:30.597 Removing: /var/run/dpdk/spdk_pid62716
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63213
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63313
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63433
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63491
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63517
00:29:30.597 Removing: /var/run/dpdk/spdk_pid63543
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65442
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65590
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65594
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65611
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65658
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65662
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65674
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65724
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65728
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65740
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65790
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65794
00:29:30.597 Removing: /var/run/dpdk/spdk_pid65806
00:29:30.597 Removing: /var/run/dpdk/spdk_pid67211
00:29:30.597 Removing: /var/run/dpdk/spdk_pid67322
00:29:30.597 Removing: /var/run/dpdk/spdk_pid68750
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70126
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70242
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70346
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70456
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70585
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70665
00:29:30.597 Removing: /var/run/dpdk/spdk_pid70818
00:29:30.597 Removing: /var/run/dpdk/spdk_pid71194
00:29:30.597 Removing: /var/run/dpdk/spdk_pid71236
00:29:30.597 Removing: /var/run/dpdk/spdk_pid71693
00:29:30.597 Removing: /var/run/dpdk/spdk_pid71883
00:29:30.597 Removing: /var/run/dpdk/spdk_pid71983
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72098
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72153
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72184
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72486
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72557
00:29:30.597 Removing: /var/run/dpdk/spdk_pid72649
00:29:30.597 Removing: /var/run/dpdk/spdk_pid73093
00:29:30.597 Removing: /var/run/dpdk/spdk_pid73241
00:29:30.597 Removing: /var/run/dpdk/spdk_pid74053
00:29:30.597 Removing: /var/run/dpdk/spdk_pid74202
00:29:30.597 Removing: /var/run/dpdk/spdk_pid74406
00:29:30.857 Removing: /var/run/dpdk/spdk_pid74514
00:29:30.857 Removing: /var/run/dpdk/spdk_pid74840
00:29:30.857 Removing: /var/run/dpdk/spdk_pid75103
00:29:30.857 Removing: /var/run/dpdk/spdk_pid75463
00:29:30.857 Removing: /var/run/dpdk/spdk_pid75707
00:29:30.857 Removing: /var/run/dpdk/spdk_pid75855
00:29:30.857 Removing: /var/run/dpdk/spdk_pid75924
00:29:30.857 Removing: /var/run/dpdk/spdk_pid76073
00:29:30.857 Removing: /var/run/dpdk/spdk_pid76110
00:29:30.857 Removing: /var/run/dpdk/spdk_pid76189
00:29:30.857 Removing: /var/run/dpdk/spdk_pid76406
00:29:30.857 Removing: /var/run/dpdk/spdk_pid76669
00:29:30.857 Removing: /var/run/dpdk/spdk_pid77113
00:29:30.857 Removing: /var/run/dpdk/spdk_pid77555
00:29:30.857 Removing: /var/run/dpdk/spdk_pid78018
00:29:30.857 Removing: /var/run/dpdk/spdk_pid78552
00:29:30.857 Removing: /var/run/dpdk/spdk_pid78705
00:29:30.857 Removing: /var/run/dpdk/spdk_pid78798
00:29:30.857 Removing: /var/run/dpdk/spdk_pid79400
00:29:30.857 Removing: /var/run/dpdk/spdk_pid79477
00:29:30.857 Removing: /var/run/dpdk/spdk_pid79921
00:29:30.857 Removing: /var/run/dpdk/spdk_pid80295
00:29:30.857 Removing: /var/run/dpdk/spdk_pid80792
00:29:30.857 Removing: /var/run/dpdk/spdk_pid80924
00:29:30.857 Removing: /var/run/dpdk/spdk_pid80978
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81042
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81098
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81164
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81359
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81445
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81512
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81571
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81612
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81698
00:29:30.857 Removing: /var/run/dpdk/spdk_pid81830
00:29:30.857 Clean
00:29:30.857 03:39:54 -- common/autotest_common.sh@1451 -- # return 0
00:29:30.857 03:39:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:29:30.857 03:39:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:30.857 03:39:54 -- common/autotest_common.sh@10 -- # set +x
00:29:31.116 03:39:54 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:29:31.116 03:39:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:31.116 03:39:54 -- common/autotest_common.sh@10 -- # set +x
00:29:31.116 03:39:54 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:31.116 03:39:54 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:31.116 03:39:54 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:31.116 03:39:54 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:29:31.116 03:39:54 -- spdk/autotest.sh@394 -- # hostname
00:29:31.116 03:39:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:31.374 geninfo: WARNING: invalid characters removed from testname!
00:29:57.916 03:40:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:58.852 03:40:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:01.409 03:40:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:03.315 03:40:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:05.221 03:40:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:07.756 03:40:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:09.662 03:40:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:09.662 03:40:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:30:09.662 03:40:33 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:30:09.662 03:40:33 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:09.662 03:40:33 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:09.662 03:40:33 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:09.662 + [[ -n 5259 ]]
00:30:09.662 + sudo kill 5259
00:30:09.671 [Pipeline] }
00:30:09.687 [Pipeline] // timeout
00:30:09.692 [Pipeline] }
00:30:09.706 [Pipeline] // stage
00:30:09.710 [Pipeline] }
00:30:09.724 [Pipeline] // catchError
00:30:09.733 [Pipeline] stage
00:30:09.736 [Pipeline] { (Stop VM)
00:30:09.748 [Pipeline] sh
00:30:10.030 + vagrant halt
00:30:13.323 ==> default: Halting domain...
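With the repeated --rc switches elided, the coverage post-processing traced above reduces to one capture, one merge, and a chain of filters. OUT below is shorthand introduced only for this sketch (the log spells out /home/vagrant/spdk_repo/spdk/../output each time); every lcov command and exclusion pattern is taken from the invocations above:

    OUT=/home/vagrant/spdk_repo/spdk/../output
    # Capture test coverage from the build tree, tagged with the VM image name.
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
        -t fedora39-cloud-1721788873-2326 -o "$OUT/cov_test.info"
    # Merge the baseline and test captures into a single tracefile.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Strip bundled DPDK, system headers, and example/tool sources.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"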
00:30:19.906 [Pipeline] sh
00:30:20.188 + vagrant destroy -f
00:30:22.769 ==> default: Removing domain...
00:30:23.351 [Pipeline] sh
00:30:23.636 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:30:23.645 [Pipeline] }
00:30:23.660 [Pipeline] // stage
00:30:23.665 [Pipeline] }
00:30:23.679 [Pipeline] // dir
00:30:23.685 [Pipeline] }
00:30:23.700 [Pipeline] // wrap
00:30:23.706 [Pipeline] }
00:30:23.719 [Pipeline] // catchError
00:30:23.729 [Pipeline] stage
00:30:23.731 [Pipeline] { (Epilogue)
00:30:23.745 [Pipeline] sh
00:30:24.029 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:29.314 [Pipeline] catchError
00:30:29.316 [Pipeline] {
00:30:29.329 [Pipeline] sh
00:30:29.610 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:29.610 Artifacts sizes are good
00:30:29.618 [Pipeline] }
00:30:29.632 [Pipeline] // catchError
00:30:29.642 [Pipeline] archiveArtifacts
00:30:29.649 Archiving artifacts
00:30:29.789 [Pipeline] cleanWs
00:30:29.801 [WS-CLEANUP] Deleting project workspace...
00:30:29.801 [WS-CLEANUP] Deferred wipeout is used...
00:30:29.808 [WS-CLEANUP] done
00:30:29.810 [Pipeline] }
00:30:29.826 [Pipeline] // stage
00:30:29.830 [Pipeline] }
00:30:29.844 [Pipeline] // node
00:30:29.849 [Pipeline] End of Pipeline
00:30:29.883 Finished: SUCCESS
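The run finishes green with every optional suite gate in spdk/autotest.sh (the '[' 0 -eq 1 ']' checks near the END TEST ftl banner) evaluating false, because the corresponding SPDK_TEST_* flags in autorun-spdk.conf were 0. The gating pattern is roughly the following sketch; the flag and script names are illustrative placeholders, not the exact lines behind autotest.sh@342-374:

    # Each optional suite only runs when its flag is set to 1 in autorun-spdk.conf.
    if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
        run_test "nvmf" ./test/nvmf/nvmf.sh
    fi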