00:00:00.001 Started by upstream project "autotest-per-patch" build number 132396
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.103 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.191 Fetching changes from the remote Git repository
00:00:00.192 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.286 Using shallow fetch with depth 1
00:00:00.286 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.286 > git --version # timeout=10
00:00:00.359 > git --version # 'git version 2.39.2'
00:00:00.359 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.418 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.418 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.840 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.859 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.875 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.875 > git config core.sparsecheckout # timeout=10
00:00:05.889 > git read-tree -mu HEAD # timeout=10
00:00:05.909 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.932 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.932 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.046 [Pipeline] Start of Pipeline
00:00:06.064 [Pipeline] library
00:00:06.066 Loading library shm_lib@master
00:00:06.067 Library shm_lib@master is cached. Copying from home.
00:00:06.088 [Pipeline] node
00:00:21.090 Still waiting to schedule task
00:00:21.091 Waiting for next available executor on ‘vagrant-vm-host’
00:23:42.817 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:23:42.819 [Pipeline] {
00:23:42.831 [Pipeline] catchError
00:23:42.833 [Pipeline] {
00:23:42.848 [Pipeline] wrap
00:23:42.857 [Pipeline] {
00:23:42.865 [Pipeline] stage
00:23:42.867 [Pipeline] { (Prologue)
00:23:42.888 [Pipeline] echo
00:23:42.889 Node: VM-host-SM4
00:23:42.896 [Pipeline] cleanWs
00:23:42.905 [WS-CLEANUP] Deleting project workspace...
00:23:42.906 [WS-CLEANUP] Deferred wipeout is used...
00:23:42.913 [WS-CLEANUP] done
00:23:43.108 [Pipeline] setCustomBuildProperty
00:23:43.204 [Pipeline] httpRequest
00:23:43.534 [Pipeline] echo
00:23:43.537 Sorcerer 10.211.164.20 is alive
00:23:43.548 [Pipeline] retry
00:23:43.550 [Pipeline] {
00:23:43.565 [Pipeline] httpRequest
00:23:43.570 HttpMethod: GET
00:23:43.571 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:23:43.572 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:23:43.572 Response Code: HTTP/1.1 200 OK
00:23:43.573 Success: Status code 200 is in the accepted range: 200,404
00:23:43.574 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:23:43.863 [Pipeline] }
00:23:43.879 [Pipeline] // retry
00:23:43.886 [Pipeline] sh
00:23:44.167 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:23:44.183 [Pipeline] httpRequest
00:23:44.511 [Pipeline] echo
00:23:44.513 Sorcerer 10.211.164.20 is alive
00:23:44.523 [Pipeline] retry
00:23:44.525 [Pipeline] {
00:23:44.541 [Pipeline] httpRequest
00:23:44.546 HttpMethod: GET
00:23:44.546 URL: http://10.211.164.20/packages/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz
00:23:44.547 Sending request to url: http://10.211.164.20/packages/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz
00:23:44.548 Response Code: HTTP/1.1 200 OK
00:23:44.548 Success: Status code 200 is in the accepted range: 200,404
00:23:44.549 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz
00:23:49.229 [Pipeline] }
00:23:49.246 [Pipeline] // retry
00:23:49.256 [Pipeline] sh
00:23:49.536 + tar --no-same-owner -xf spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz
00:23:52.886 [Pipeline] sh
00:23:53.278 + git -C spdk log --oneline -n5
00:23:53.278 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:23:53.278 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:23:53.278 4ab755590 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:23:53.278 f40c2e7bb dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT
00:23:53.278 325a79ea3 bdev/malloc: Support accel sequence when DIF is enabled
00:23:53.299 [Pipeline] writeFile
00:23:53.316 [Pipeline] sh
00:23:53.596 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:23:53.609 [Pipeline] sh
00:23:53.893 + cat autorun-spdk.conf
00:23:53.893 SPDK_RUN_FUNCTIONAL_TEST=1
00:23:53.893 SPDK_TEST_NVME=1
00:23:53.893 SPDK_TEST_FTL=1
00:23:53.893 SPDK_TEST_ISAL=1
00:23:53.893 SPDK_RUN_ASAN=1
00:23:53.893 SPDK_RUN_UBSAN=1
00:23:53.893 SPDK_TEST_XNVME=1
00:23:53.893 SPDK_TEST_NVME_FDP=1
00:23:53.893 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:23:53.901 RUN_NIGHTLY=0
00:23:53.903 [Pipeline] }
00:23:53.920 [Pipeline] // stage
00:23:53.937 [Pipeline] stage
00:23:53.939 [Pipeline] { (Run VM)
00:23:53.954 [Pipeline] sh
00:23:54.236 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:23:54.236 + echo 'Start stage prepare_nvme.sh'
00:23:54.236 Start stage prepare_nvme.sh
00:23:54.236 + [[ -n 4 ]]
00:23:54.236 + disk_prefix=ex4
00:23:54.236 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:23:54.236 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:23:54.236 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:23:54.236 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:23:54.236 ++ SPDK_TEST_NVME=1
00:23:54.236 ++ SPDK_TEST_FTL=1
00:23:54.236 ++ SPDK_TEST_ISAL=1
00:23:54.236 ++ SPDK_RUN_ASAN=1
00:23:54.236 ++ SPDK_RUN_UBSAN=1
00:23:54.236 ++ SPDK_TEST_XNVME=1
00:23:54.236 ++ SPDK_TEST_NVME_FDP=1
00:23:54.236 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:23:54.236 ++ RUN_NIGHTLY=0
00:23:54.236 + cd /var/jenkins/workspace/nvme-vg-autotest
00:23:54.236 + nvme_files=()
00:23:54.236 + declare -A nvme_files
00:23:54.236 + backend_dir=/var/lib/libvirt/images/backends
00:23:54.236 + nvme_files['nvme.img']=5G
00:23:54.236 + nvme_files['nvme-cmb.img']=5G
00:23:54.236 + nvme_files['nvme-multi0.img']=4G
00:23:54.236 + nvme_files['nvme-multi1.img']=4G
00:23:54.236 + nvme_files['nvme-multi2.img']=4G
00:23:54.236 + nvme_files['nvme-openstack.img']=8G
00:23:54.236 + nvme_files['nvme-zns.img']=5G
00:23:54.236 + (( SPDK_TEST_NVME_PMR == 1 ))
00:23:54.236 + (( SPDK_TEST_FTL == 1 ))
00:23:54.236 + nvme_files["nvme-ftl.img"]=6G
00:23:54.236 + (( SPDK_TEST_NVME_FDP == 1 ))
00:23:54.236 + nvme_files["nvme-fdp.img"]=1G
00:23:54.236 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:23:54.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:23:54.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:23:54.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:23:54.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:23:54.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:23:54.236 + for nvme in "${!nvme_files[@]}"
00:23:54.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:23:54.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:23:54.495 + for nvme in "${!nvme_files[@]}"
00:23:54.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:23:54.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:23:54.495 + for nvme in "${!nvme_files[@]}"
00:23:54.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:23:54.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:23:54.495 + for nvme in "${!nvme_files[@]}"
00:23:54.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:23:54.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:23:54.495 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:23:54.495 + echo 'End stage prepare_nvme.sh'
00:23:54.495 End stage prepare_nvme.sh
00:23:54.508 [Pipeline] sh
00:23:54.787 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:23:54.787 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:23:54.787 
00:23:54.787 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:23:54.787 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:23:54.787 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:23:54.787 HELP=0
00:23:54.787 DRY_RUN=0
00:23:54.787 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:23:54.787 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:23:54.787 NVME_AUTO_CREATE=0
00:23:54.787 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:23:54.787 NVME_CMB=,,,,
00:23:54.787 NVME_PMR=,,,,
00:23:54.787 NVME_ZNS=,,,,
00:23:54.787 NVME_MS=true,,,,
00:23:54.787 NVME_FDP=,,,on,
00:23:54.787 SPDK_VAGRANT_DISTRO=fedora39
00:23:54.787 SPDK_VAGRANT_VMCPU=10
00:23:54.787 SPDK_VAGRANT_VMRAM=12288
00:23:54.787 SPDK_VAGRANT_PROVIDER=libvirt
00:23:54.787 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:23:54.787 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:23:54.787 SPDK_OPENSTACK_NETWORK=0
00:23:54.787 VAGRANT_PACKAGE_BOX=0
00:23:54.787 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:23:54.787 FORCE_DISTRO=true
00:23:54.787 VAGRANT_BOX_VERSION=
00:23:54.787 EXTRA_VAGRANTFILES=
00:23:54.787 NIC_MODEL=e1000
00:23:54.787 
00:23:54.787 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:23:54.787 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:23:58.975 Bringing machine 'default' up with 'libvirt' provider...
00:23:59.234 ==> default: Creating image (snapshot of base box volume).
00:23:59.234 ==> default: Creating domain with the following settings...
00:23:59.234 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732110296_64e001373d85d8cdf608
00:23:59.234 ==> default: -- Domain type: kvm
00:23:59.234 ==> default: -- Cpus: 10
00:23:59.234 ==> default: -- Feature: acpi
00:23:59.234 ==> default: -- Feature: apic
00:23:59.234 ==> default: -- Feature: pae
00:23:59.234 ==> default: -- Memory: 12288M
00:23:59.234 ==> default: -- Memory Backing: hugepages:
00:23:59.234 ==> default: -- Management MAC:
00:23:59.234 ==> default: -- Loader:
00:23:59.234 ==> default: -- Nvram:
00:23:59.234 ==> default: -- Base box: spdk/fedora39
00:23:59.234 ==> default: -- Storage pool: default
00:23:59.234 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732110296_64e001373d85d8cdf608.img (20G)
00:23:59.234 ==> default: -- Volume Cache: default
00:23:59.235 ==> default: -- Kernel:
00:23:59.235 ==> default: -- Initrd:
00:23:59.235 ==> default: -- Graphics Type: vnc
00:23:59.235 ==> default: -- Graphics Port: -1
00:23:59.235 ==> default: -- Graphics IP: 127.0.0.1
00:23:59.235 ==> default: -- Graphics Password: Not defined
00:23:59.235 ==> default: -- Video Type: cirrus
00:23:59.235 ==> default: -- Video VRAM: 9216
00:23:59.235 ==> default: -- Sound Type:
00:23:59.235 ==> default: -- Keymap: en-us
00:23:59.235 ==> default: -- TPM Path:
00:23:59.235 ==> default: -- INPUT: type=mouse, bus=ps2
00:23:59.235 ==> default: -- Command line args:
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:23:59.235 ==> default: -> value=-drive,
00:23:59.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:23:59.235 ==> default: -> value=-drive,
00:23:59.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:23:59.235 ==> default: -> value=-drive,
00:23:59.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:59.235 ==> default: -> value=-drive,
00:23:59.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:23:59.235 ==> default: -> value=-device,
00:23:59.235 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:59.235 ==> default: -> value=-drive,
00:23:59.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:23:59.494 ==> default: -> value=-device,
00:23:59.494 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:59.494 ==> default: -> value=-device,
00:23:59.494 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:23:59.494 ==> default: -> value=-device,
00:23:59.494 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:23:59.494 ==> default: -> value=-drive,
00:23:59.494 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:23:59.494 ==> default: -> value=-device,
00:23:59.494 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:59.494 ==> default: Creating shared folders metadata...
00:23:59.494 ==> default: Starting domain.
00:24:01.395 ==> default: Waiting for domain to get an IP address...
00:24:16.295 ==> default: Waiting for SSH to become available...
00:24:18.194 ==> default: Configuring and enabling network interfaces...
00:24:22.433 default: SSH address: 192.168.121.111:22
00:24:22.433 default: SSH username: vagrant
00:24:22.433 default: SSH auth method: private key
00:24:24.335 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:24:34.319 ==> default: Mounting SSHFS shared folder...
00:24:34.884 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:24:34.884 ==> default: Checking Mount..
00:24:36.259 ==> default: Folder Successfully Mounted!
00:24:36.259 ==> default: Running provisioner: file...
00:24:36.825 default: ~/.gitconfig => .gitconfig
00:24:37.393 
00:24:37.393 SUCCESS!
00:24:37.393 
00:24:37.393 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:24:37.393 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:24:37.394 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:24:37.394 
00:24:37.403 [Pipeline] }
00:24:37.419 [Pipeline] // stage
00:24:37.429 [Pipeline] dir
00:24:37.430 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:24:37.432 [Pipeline] {
00:24:37.447 [Pipeline] catchError
00:24:37.449 [Pipeline] {
00:24:37.461 [Pipeline] sh
00:24:37.743 + vagrant ssh-config --host vagrant
00:24:37.743 + sed -ne /^Host/,$p
00:24:37.743 + tee ssh_conf
00:24:41.032 Host vagrant
00:24:41.032 HostName 192.168.121.111
00:24:41.032 User vagrant
00:24:41.033 Port 22
00:24:41.033 UserKnownHostsFile /dev/null
00:24:41.033 StrictHostKeyChecking no
00:24:41.033 PasswordAuthentication no
00:24:41.033 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:24:41.033 IdentitiesOnly yes
00:24:41.033 LogLevel FATAL
00:24:41.033 ForwardAgent yes
00:24:41.033 ForwardX11 yes
00:24:41.033 
00:24:41.047 [Pipeline] withEnv
00:24:41.049 [Pipeline] {
00:24:41.065 [Pipeline] sh
00:24:41.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:24:41.347 source /etc/os-release
00:24:41.347 [[ -e /image.version ]] && img=$(< /image.version)
00:24:41.347 # Minimal, systemd-like check.
00:24:41.347 if [[ -e /.dockerenv ]]; then
00:24:41.347 # Clear garbage from the node's name:
00:24:41.347 # agt-er_autotest_547-896 -> autotest_547-896
00:24:41.347 # $HOSTNAME is the actual container id
00:24:41.347 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:24:41.347 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:24:41.347 # We can assume this is a mount from a host where container is running,
00:24:41.347 # so fetch its hostname to easily identify the target swarm worker.
00:24:41.347 container="$(< /etc/hostname) ($agent)"
00:24:41.347 else
00:24:41.347 # Fallback
00:24:41.347 container=$agent
00:24:41.347 fi
00:24:41.347 fi
00:24:41.347 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:24:41.347 
00:24:41.615 [Pipeline] }
00:24:41.632 [Pipeline] // withEnv
00:24:41.641 [Pipeline] setCustomBuildProperty
00:24:41.660 [Pipeline] stage
00:24:41.663 [Pipeline] { (Tests)
00:24:41.681 [Pipeline] sh
00:24:41.959 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:24:41.973 [Pipeline] sh
00:24:42.254 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:24:42.526 [Pipeline] timeout
00:24:42.526 Timeout set to expire in 50 min
00:24:42.529 [Pipeline] {
00:24:42.544 [Pipeline] sh
00:24:42.822 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:24:43.391 HEAD is now at f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:24:43.403 [Pipeline] sh
00:24:43.766 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:24:43.780 [Pipeline] sh
00:24:44.060 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:24:44.335 [Pipeline] sh
00:24:44.616 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:24:44.875 ++ readlink -f spdk_repo
00:24:44.875 + DIR_ROOT=/home/vagrant/spdk_repo
00:24:44.875 + [[ -n /home/vagrant/spdk_repo ]]
00:24:44.875 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:24:44.875 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:24:44.875 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:24:44.875 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:24:44.875 + [[ -d /home/vagrant/spdk_repo/output ]]
00:24:44.875 + [[ nvme-vg-autotest == pkgdep-* ]]
00:24:44.875 + cd /home/vagrant/spdk_repo
00:24:44.875 + source /etc/os-release
00:24:44.875 ++ NAME='Fedora Linux'
00:24:44.875 ++ VERSION='39 (Cloud Edition)'
00:24:44.875 ++ ID=fedora
00:24:44.875 ++ VERSION_ID=39
00:24:44.875 ++ VERSION_CODENAME=
00:24:44.875 ++ PLATFORM_ID=platform:f39
00:24:44.875 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:24:44.875 ++ ANSI_COLOR='0;38;2;60;110;180'
00:24:44.875 ++ LOGO=fedora-logo-icon
00:24:44.875 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:24:44.875 ++ HOME_URL=https://fedoraproject.org/
00:24:44.875 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:24:44.875 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:24:44.875 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:24:44.875 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:24:44.875 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:24:44.875 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:24:44.875 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:24:44.875 ++ SUPPORT_END=2024-11-12
00:24:44.875 ++ VARIANT='Cloud Edition'
00:24:44.875 ++ VARIANT_ID=cloud
00:24:44.875 + uname -a
00:24:44.875 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:24:44.875 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:24:45.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:45.392 Hugepages
00:24:45.392 node hugesize free / total
00:24:45.392 node0 1048576kB 0 / 0
00:24:45.392 node0 2048kB 0 / 0
00:24:45.392 
00:24:45.392 Type BDF Vendor Device NUMA Driver Device Block devices
00:24:45.392 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:24:45.392 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:24:45.650 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:24:45.650 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:24:45.650 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:24:45.650 + rm -f /tmp/spdk-ld-path
00:24:45.650 + source autorun-spdk.conf
00:24:45.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:24:45.650 ++ SPDK_TEST_NVME=1
00:24:45.650 ++ SPDK_TEST_FTL=1
00:24:45.651 ++ SPDK_TEST_ISAL=1
00:24:45.651 ++ SPDK_RUN_ASAN=1
00:24:45.651 ++ SPDK_RUN_UBSAN=1
00:24:45.651 ++ SPDK_TEST_XNVME=1
00:24:45.651 ++ SPDK_TEST_NVME_FDP=1
00:24:45.651 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:24:45.651 ++ RUN_NIGHTLY=0
00:24:45.651 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:24:45.651 + [[ -n '' ]]
00:24:45.651 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:24:45.651 + for M in /var/spdk/build-*-manifest.txt
00:24:45.651 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:24:45.651 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:24:45.651 + for M in /var/spdk/build-*-manifest.txt
00:24:45.651 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:24:45.651 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:24:45.651 + for M in /var/spdk/build-*-manifest.txt
00:24:45.651 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:24:45.651 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:24:45.651 ++ uname
00:24:45.651 + [[ Linux == \L\i\n\u\x ]]
00:24:45.651 + sudo dmesg -T
00:24:45.651 + sudo dmesg --clear
00:24:45.651 + dmesg_pid=5291
+ sudo dmesg -Tw
00:24:45.651 + [[ Fedora Linux == FreeBSD ]]
00:24:45.651 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:45.651 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:45.651 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:24:45.651 + [[ -x /usr/src/fio-static/fio ]]
00:24:45.651 + export FIO_BIN=/usr/src/fio-static/fio
00:24:45.651 + FIO_BIN=/usr/src/fio-static/fio
00:24:45.651 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:24:45.651 + [[ ! -v VFIO_QEMU_BIN ]]
00:24:45.651 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:24:45.651 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:45.651 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:45.651 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:24:45.651 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:45.651 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:45.651 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:24:45.651 13:45:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:24:45.651 13:45:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:24:45.910 13:45:42 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:24:45.910 13:45:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:24:45.910 13:45:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:24:45.910 13:45:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:24:45.910 13:45:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:45.910 13:45:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:24:45.910 13:45:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:45.910 13:45:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:45.910 13:45:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:45.910 13:45:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:45.910 13:45:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:45.910 13:45:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:45.910 13:45:43 -- paths/export.sh@5 -- $ export PATH
00:24:45.910 13:45:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:45.910 13:45:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:24:45.910 13:45:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:24:45.910 13:45:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732110343.XXXXXX
00:24:45.910 13:45:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732110343.I8ZC73
00:24:45.910 13:45:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:24:45.910 13:45:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:24:45.910 13:45:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:24:45.910 13:45:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:24:45.910 13:45:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:24:45.910 13:45:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:24:45.910 13:45:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:24:45.910 13:45:43 -- common/autotest_common.sh@10 -- $ set +x
00:24:45.910 13:45:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:24:45.910 13:45:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:24:45.910 13:45:43 -- pm/common@17 -- $ local monitor
00:24:45.910 13:45:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:45.910 13:45:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:45.910 13:45:43 -- pm/common@25 -- $ sleep 1
00:24:45.910 13:45:43 -- pm/common@21 -- $ date +%s
00:24:45.910 13:45:43 -- pm/common@21 -- $ date +%s
00:24:45.910 13:45:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732110343
00:24:45.910 13:45:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732110343
00:24:45.910 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732110343_collect-cpu-load.pm.log
00:24:45.910 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732110343_collect-vmstat.pm.log
00:24:46.843 13:45:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:24:46.843 13:45:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:24:46.843 13:45:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:24:46.843 13:45:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:24:46.843 13:45:44 -- spdk/autobuild.sh@16 -- $ date -u
00:24:46.843 Wed Nov 20 01:45:44 PM UTC 2024
00:24:46.843 13:45:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:24:46.843 v25.01-pre-249-gf9d18d578
00:24:46.843 13:45:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:24:46.843 13:45:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:24:46.843 13:45:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:24:46.843 13:45:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:24:46.843 13:45:44 -- common/autotest_common.sh@10 -- $ set +x
00:24:46.843 ************************************
00:24:46.843 START TEST asan
00:24:46.843 ************************************
00:24:46.843 using asan
00:24:46.843 13:45:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:24:46.843 
00:24:46.843 real 0m0.000s
00:24:46.843 user 0m0.000s
00:24:46.843 sys 0m0.000s
00:24:46.843 13:45:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:24:46.843 13:45:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:24:46.843 ************************************
00:24:46.843 END TEST asan
00:24:46.843 ************************************
00:24:46.843 13:45:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:24:46.843 13:45:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:24:46.843 13:45:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:24:46.843 13:45:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:24:46.843 13:45:44 -- common/autotest_common.sh@10 -- $ set +x
00:24:46.843 ************************************
00:24:46.843 START TEST ubsan
00:24:46.843 ************************************
00:24:46.843 using ubsan
00:24:46.843 13:45:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:24:46.843 
00:24:46.843 real 0m0.000s
00:24:46.843 user 0m0.000s
00:24:46.843 sys 0m0.000s
00:24:46.843 13:45:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:24:46.843 13:45:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:24:46.843 ************************************
00:24:46.843 END TEST ubsan
00:24:46.843 ************************************
00:24:47.101 13:45:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:24:47.101 13:45:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:24:47.101 13:45:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:24:47.101 13:45:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:24:47.102 13:45:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:24:47.102 13:45:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:24:47.102 13:45:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:24:47.102 13:45:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:24:47.102 13:45:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:24:47.102 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:24:47.102 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:24:47.666 Using 'verbs' RDMA provider
00:25:03.555 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:25:15.762 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:25:16.589 Creating mk/config.mk...done.
00:25:16.589 Creating mk/cc.flags.mk...done.
00:25:16.589 Type 'make' to build.
00:25:16.589 13:46:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:25:16.589 13:46:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:25:16.589 13:46:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:25:16.589 13:46:13 -- common/autotest_common.sh@10 -- $ set +x
00:25:16.589 ************************************
00:25:16.589 START TEST make
00:25:16.589 ************************************
00:25:16.589 13:46:13 make -- common/autotest_common.sh@1129 -- $ make -j10
00:25:16.848 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:25:16.848 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:25:16.848 meson setup builddir \
00:25:16.848 -Dwith-libaio=enabled \
00:25:16.848 -Dwith-liburing=enabled \
00:25:16.848 -Dwith-libvfn=disabled \
00:25:16.848 -Dwith-spdk=disabled \
00:25:16.848 -Dexamples=false \
00:25:16.848 -Dtests=false \
00:25:16.848 -Dtools=false && \
00:25:16.848 meson compile -C builddir && \
00:25:16.848 cd -)
00:25:16.848 make[1]: Nothing to be done for 'all'.
00:25:20.137 The Meson build system
00:25:20.137 Version: 1.5.0
00:25:20.137 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:25:20.137 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:25:20.137 Build type: native build
00:25:20.137 Project name: xnvme
00:25:20.137 Project version: 0.7.5
00:25:20.137 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:25:20.137 C linker for the host machine: cc ld.bfd 2.40-14
00:25:20.137 Host machine cpu family: x86_64
00:25:20.137 Host machine cpu: x86_64
00:25:20.137 Message: host_machine.system: linux
00:25:20.137 Compiler for C supports arguments -Wno-missing-braces: YES
00:25:20.137 Compiler for C supports arguments -Wno-cast-function-type: YES
00:25:20.137 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:25:20.137 Run-time dependency threads found: YES
00:25:20.137 Has header "setupapi.h" : NO
00:25:20.137 Has header "linux/blkzoned.h" : YES
00:25:20.137 Has header "linux/blkzoned.h" : YES (cached)
00:25:20.137 Has header "libaio.h" : YES
00:25:20.137 Library aio found: YES
00:25:20.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:25:20.137 Run-time dependency liburing found: YES 2.2
00:25:20.137 Dependency libvfn skipped: feature with-libvfn disabled
00:25:20.137 Found CMake: /usr/bin/cmake (3.27.7)
00:25:20.137 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:25:20.137 Subproject spdk : skipped: feature with-spdk disabled
00:25:20.138 Run-time dependency appleframeworks found: NO (tried framework)
00:25:20.138 Run-time dependency appleframeworks found: NO (tried framework)
00:25:20.138 Library rt found: YES
00:25:20.138 Checking for function "clock_gettime" with dependency -lrt: YES
00:25:20.138 Configuring xnvme_config.h using configuration
00:25:20.138 Configuring xnvme.spec using configuration
00:25:20.138 Run-time dependency bash-completion found: YES 2.11
00:25:20.138 Message: Bash-completions: /usr/share/bash-completion/completions
00:25:20.138 Program cp found: YES (/usr/bin/cp)
00:25:20.138 Build targets in project: 3
00:25:20.138 
00:25:20.138 xnvme 0.7.5
00:25:20.138 
00:25:20.138 Subprojects
00:25:20.138 spdk : NO Feature 'with-spdk' disabled
00:25:20.138 
00:25:20.138 User defined options
00:25:20.138 examples : false
00:25:20.138 tests : false
00:25:20.138 tools : false
00:25:20.138 with-libaio : enabled
00:25:20.138 with-liburing: enabled
00:25:20.138 with-libvfn : disabled
00:25:20.138 with-spdk : disabled
00:25:20.138 
00:25:20.138 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:25:20.396 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:25:20.396 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:25:20.396 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:25:20.396 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:25:20.396 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:25:20.396 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:25:20.396 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:25:20.396 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:25:20.396 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:25:20.396 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:25:20.396 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:25:20.654 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:25:20.654 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:25:20.654 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:25:20.654 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:25:20.654 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:25:20.654 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:25:20.654 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:25:20.654 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:25:20.654 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:25:20.654 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:25:20.654 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:25:20.654 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:25:20.654 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:25:20.654 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:25:20.654 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:25:20.654 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:25:20.654 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:25:20.654 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:25:20.654 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:25:20.654 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:25:20.654 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:25:20.913 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:25:20.913 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:25:20.913 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:25:20.913 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:25:20.913 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:25:20.913 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:25:20.913 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:25:20.913 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:25:20.913 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:25:20.913 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:25:20.913 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:25:20.913 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:25:20.913 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:25:20.913 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:25:20.913 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:25:20.913 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:25:20.913 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:25:20.913 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:25:20.913 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:25:20.913 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:25:20.913 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:25:20.913 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:25:21.171 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:25:21.171 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:25:21.171 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:25:21.171 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:25:21.171 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:25:21.171 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:25:21.171 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:25:21.171 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:25:21.171 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:25:21.171 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:25:21.171 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:25:21.429 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:25:21.429 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:25:21.429 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:25:21.429 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:25:21.429 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:25:21.429 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:25:21.429 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:25:21.429 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:25:21.429 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:25:21.997 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:25:21.997 [75/76] Linking static target lib/libxnvme.a
00:25:21.997 [76/76] Linking target lib/libxnvme.so.0.7.5
00:25:21.997 INFO: autodetecting backend as ninja
00:25:21.997 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:25:32.027 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:25:32.027 The Meson build system
00:25:32.027 Version: 1.5.0
00:25:32.027 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:25:32.027 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:25:32.027 Build type: native build
00:25:32.027 Program cat found: YES (/usr/bin/cat)
00:25:32.027 Project name: DPDK
00:25:32.027 Project version: 24.03.0
00:25:32.027 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:25:32.027 C linker for the host machine: cc ld.bfd 2.40-14
00:25:32.027 Host machine cpu family: x86_64
00:25:32.027 Host machine cpu: x86_64
00:25:32.027 Message: ## Building in Developer Mode ##
00:25:32.027 Program pkg-config found: YES (/usr/bin/pkg-config)
00:25:32.027 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:25:32.027 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:25:32.027 Program python3 found: YES (/usr/bin/python3)
00:25:32.027 Program cat found: YES (/usr/bin/cat)
00:25:32.027 Compiler for C supports arguments -march=native: YES
00:25:32.027 Checking for size of "void *" : 8
00:25:32.027 Checking for size of "void *" : 8 (cached)
00:25:32.027 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:25:32.027 Library m found: YES
00:25:32.027 Library numa found: YES
00:25:32.027 Has header "numaif.h" : YES
00:25:32.027 Library fdt found: NO
00:25:32.027 Library execinfo found: NO
00:25:32.027 Has header "execinfo.h" : YES
00:25:32.027 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:25:32.027 Run-time dependency libarchive found: NO (tried pkgconfig)
00:25:32.027 Run-time dependency libbsd found: NO (tried pkgconfig)
00:25:32.027 Run-time dependency jansson found: NO (tried pkgconfig)
00:25:32.027 Run-time dependency openssl found: YES 3.1.1
00:25:32.027 Run-time dependency libpcap found: YES 1.10.4
00:25:32.027 Has header "pcap.h" with dependency libpcap: YES
00:25:32.027 Compiler for C supports arguments -Wcast-qual: YES
00:25:32.027 Compiler for C supports arguments -Wdeprecated: YES
00:25:32.027 Compiler for C supports arguments -Wformat: YES
00:25:32.027 Compiler for C supports arguments -Wformat-nonliteral: NO
00:25:32.027 Compiler for C supports arguments -Wformat-security: NO
00:25:32.027 Compiler for C supports arguments -Wmissing-declarations: YES
00:25:32.027 Compiler for C supports arguments -Wmissing-prototypes: YES
00:25:32.027 Compiler for C supports arguments -Wnested-externs: YES
00:25:32.027 Compiler for C supports arguments -Wold-style-definition: YES
00:25:32.027 Compiler for C supports arguments -Wpointer-arith: YES
00:25:32.027 Compiler for C supports arguments -Wsign-compare: YES
00:25:32.027 Compiler for C supports arguments -Wstrict-prototypes: YES
00:25:32.027 Compiler for C supports arguments -Wundef: YES
00:25:32.027 Compiler for C supports arguments -Wwrite-strings: YES
00:25:32.027 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:25:32.027 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:25:32.027 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:25:32.027 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:25:32.027 Program objdump found: YES (/usr/bin/objdump)
00:25:32.027 Compiler for C supports arguments -mavx512f: YES
00:25:32.027 Checking if "AVX512 checking" compiles: YES
00:25:32.027 Fetching value of define "__SSE4_2__" : 1
00:25:32.027 Fetching value of define "__AES__" : 1
00:25:32.027 Fetching value of define "__AVX__" : 1
00:25:32.027 Fetching value of define "__AVX2__" : 1
00:25:32.027 Fetching value of define "__AVX512BW__" : 1
00:25:32.027 Fetching value of define "__AVX512CD__" : 1
00:25:32.027 Fetching value of define "__AVX512DQ__" : 1
00:25:32.027 Fetching value of define "__AVX512F__" : 1
00:25:32.028 Fetching value of define "__AVX512VL__" : 1
00:25:32.028 Fetching value of define "__PCLMUL__" : 1
00:25:32.028 Fetching value of define "__RDRND__" : 1
00:25:32.028 Fetching value of define "__RDSEED__" : 1
00:25:32.028 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:25:32.028 Fetching value of define "__znver1__" : (undefined)
00:25:32.028 Fetching value of define "__znver2__" : (undefined)
00:25:32.028 Fetching value of define "__znver3__" : (undefined)
00:25:32.028 Fetching value of define "__znver4__" : (undefined)
00:25:32.028 Library asan found: YES
00:25:32.028 Compiler for C supports arguments -Wno-format-truncation: YES
00:25:32.028 Message: lib/log: Defining dependency "log"
00:25:32.028 Message: lib/kvargs: Defining dependency "kvargs"
00:25:32.028 Message: lib/telemetry: Defining dependency "telemetry"
00:25:32.028 Library rt found: YES
00:25:32.028 Checking for function "getentropy" : NO
00:25:32.028 Message: lib/eal: Defining dependency "eal"
00:25:32.028 Message: lib/ring: Defining dependency "ring"
00:25:32.028 Message: lib/rcu: Defining dependency "rcu"
00:25:32.028 Message: lib/mempool: Defining dependency "mempool"
00:25:32.028 Message: lib/mbuf: Defining dependency "mbuf"
00:25:32.028 Fetching value of define "__PCLMUL__" : 1 (cached)
00:25:32.028 Fetching value of define "__AVX512F__" : 1 (cached)
00:25:32.028 Fetching value of define "__AVX512BW__" : 1 (cached)
00:25:32.028 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:25:32.028 Fetching value of define "__AVX512VL__" : 1 (cached)
00:25:32.028 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:25:32.028 Compiler for C supports arguments -mpclmul: YES
00:25:32.028 Compiler for C supports arguments -maes: YES
00:25:32.028 Compiler for C supports arguments -mavx512f: YES (cached)
00:25:32.028 Compiler for C supports arguments -mavx512bw: YES
00:25:32.028 Compiler for C supports arguments -mavx512dq: YES
00:25:32.028 Compiler for C supports arguments -mavx512vl: YES
00:25:32.028 Compiler for C supports arguments -mvpclmulqdq: YES
00:25:32.028 Compiler for C supports arguments -mavx2: YES
00:25:32.028 Compiler for C supports arguments -mavx: YES
00:25:32.028 Message: lib/net: Defining dependency "net"
00:25:32.028 Message: lib/meter: Defining dependency "meter"
00:25:32.028 Message: lib/ethdev: Defining dependency "ethdev"
00:25:32.028 Message: lib/pci: Defining dependency "pci"
00:25:32.028 Message: lib/cmdline: Defining dependency "cmdline"
00:25:32.028 Message: lib/hash: Defining dependency "hash"
00:25:32.028 Message: lib/timer: Defining dependency "timer"
00:25:32.028 Message: lib/compressdev: Defining dependency "compressdev"
00:25:32.028 Message: lib/cryptodev: Defining dependency "cryptodev"
00:25:32.028 Message: lib/dmadev: Defining dependency "dmadev"
00:25:32.028 Compiler for C supports arguments -Wno-cast-qual: YES
00:25:32.028 Message: lib/power: Defining dependency "power"
00:25:32.028 Message: lib/reorder: Defining dependency "reorder"
00:25:32.028 Message: lib/security: Defining dependency "security"
00:25:32.028 Has header "linux/userfaultfd.h" : YES
00:25:32.028 Has header "linux/vduse.h" : YES
00:25:32.028 Message: lib/vhost: Defining dependency "vhost"
00:25:32.028 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:25:32.028 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:25:32.028 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:25:32.028 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:25:32.028 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:25:32.028 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:25:32.028 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:25:32.028 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:25:32.028 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:25:32.028 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:25:32.028 Program doxygen found: YES (/usr/local/bin/doxygen)
00:25:32.028 Configuring doxy-api-html.conf using configuration
00:25:32.028 Configuring doxy-api-man.conf using configuration
00:25:32.028 Program mandb found: YES (/usr/bin/mandb)
00:25:32.028 Program sphinx-build found: NO
00:25:32.028 Configuring rte_build_config.h using configuration
00:25:32.028 Message:
00:25:32.028 =================
00:25:32.028 Applications Enabled
00:25:32.028 =================
00:25:32.028 
00:25:32.028 apps:
00:25:32.028 
00:25:32.028 
00:25:32.028 Message:
00:25:32.028 =================
00:25:32.028 Libraries Enabled
00:25:32.028 =================
00:25:32.028 
00:25:32.028 libs:
00:25:32.028 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:25:32.028 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:25:32.028 cryptodev, dmadev, power, reorder, security, vhost,
00:25:32.028 
00:25:32.028 Message:
00:25:32.028 ===============
00:25:32.028 Drivers Enabled
00:25:32.028 ===============
00:25:32.028 
00:25:32.028 common:
00:25:32.028 
00:25:32.028 bus:
00:25:32.028 pci, vdev,
00:25:32.028 mempool:
00:25:32.028 ring,
00:25:32.028 dma:
00:25:32.028 
00:25:32.028 net:
00:25:32.028 
00:25:32.028 crypto:
00:25:32.028 
00:25:32.028 compress:
00:25:32.028 
00:25:32.028 vdpa:
00:25:32.028 
00:25:32.028 
00:25:32.028 Message:
00:25:32.028 =================
00:25:32.028 Content Skipped
00:25:32.028 =================
00:25:32.028 
00:25:32.028 apps:
00:25:32.028 dumpcap: explicitly disabled via build config
00:25:32.028 graph: explicitly disabled via build config
00:25:32.028 pdump: explicitly disabled via build config
00:25:32.028 proc-info: explicitly disabled via build config
00:25:32.028 test-acl: explicitly disabled via build config
00:25:32.028 test-bbdev: explicitly disabled via build config
00:25:32.028 test-cmdline: explicitly disabled via build config
00:25:32.028 test-compress-perf: explicitly disabled via build config
00:25:32.028 test-crypto-perf: explicitly disabled via build config
00:25:32.028 test-dma-perf: explicitly disabled via build config
00:25:32.028 test-eventdev: explicitly disabled via build config
00:25:32.028 test-fib: explicitly disabled via build config
00:25:32.028 test-flow-perf: explicitly disabled via build config
00:25:32.028 test-gpudev: explicitly disabled via build config
00:25:32.028 test-mldev: explicitly disabled via build config
00:25:32.028 test-pipeline: explicitly disabled via build config
00:25:32.028 test-pmd: explicitly disabled via build config
00:25:32.028 test-regex: explicitly disabled via build config
00:25:32.028 test-sad: explicitly disabled via build config
00:25:32.028 test-security-perf: explicitly disabled via build config
00:25:32.028 
00:25:32.028 libs:
00:25:32.028 argparse: explicitly disabled via build config
00:25:32.028 metrics: explicitly disabled via build config
00:25:32.028 acl: explicitly disabled via build config
00:25:32.028 bbdev: explicitly disabled via build config
00:25:32.028 bitratestats: explicitly disabled via build config
00:25:32.028 bpf: explicitly disabled via build config
00:25:32.028 cfgfile: explicitly disabled via build config
00:25:32.028 distributor: explicitly disabled via build config
00:25:32.028 efd: explicitly disabled via build config
00:25:32.028 eventdev: explicitly disabled via build config
00:25:32.028 dispatcher: explicitly disabled via build config
00:25:32.028 gpudev: explicitly disabled via build config
00:25:32.028 gro: explicitly disabled via build config
00:25:32.028 gso: explicitly disabled via build config
00:25:32.028 ip_frag: explicitly disabled via build config
00:25:32.028 jobstats: explicitly disabled via build config
00:25:32.028 latencystats: explicitly disabled via build config
00:25:32.028 lpm: explicitly disabled via build config
00:25:32.028 member: explicitly disabled via build config
00:25:32.028 pcapng: explicitly disabled via build config
00:25:32.028 rawdev: explicitly disabled via build config
00:25:32.028 regexdev: explicitly disabled via build config
00:25:32.028 mldev: explicitly disabled via build config
00:25:32.028 rib: explicitly disabled via build config
00:25:32.028 sched: explicitly disabled via build config
00:25:32.028 stack: explicitly disabled via build config
00:25:32.028 ipsec: explicitly disabled via build config
00:25:32.028 pdcp: explicitly disabled via build config
00:25:32.028 fib: explicitly disabled via build config
00:25:32.028 port: explicitly disabled via build config
00:25:32.028 pdump: explicitly disabled via build config
00:25:32.028 table: explicitly disabled via build config
00:25:32.028 pipeline: explicitly disabled via build config
00:25:32.028 graph: explicitly disabled via build config
00:25:32.028 node: explicitly disabled via build config
00:25:32.028 
00:25:32.028 drivers:
00:25:32.028 common/cpt: not in enabled drivers build config
00:25:32.028 common/dpaax: not in enabled drivers build config
00:25:32.028 common/iavf: not in enabled drivers build config
00:25:32.028 common/idpf: not in enabled drivers build config
00:25:32.028 common/ionic: not in enabled drivers build config
00:25:32.028 common/mvep: not in enabled drivers build config
00:25:32.028 common/octeontx: not in enabled drivers build config
00:25:32.028 bus/auxiliary: not in enabled drivers build config
00:25:32.028 bus/cdx: not in enabled drivers build config
00:25:32.028 bus/dpaa: not in enabled drivers build config
00:25:32.028 bus/fslmc: not in enabled drivers build config
00:25:32.028 bus/ifpga: not in enabled drivers build config
00:25:32.028 bus/platform: not in enabled drivers build config
00:25:32.028 bus/uacce: not in enabled drivers build config
00:25:32.028 bus/vmbus: not in enabled drivers build config
00:25:32.028 common/cnxk: not in enabled drivers build config
00:25:32.028 common/mlx5: not in enabled drivers build config
00:25:32.028 common/nfp: not in enabled drivers build config
00:25:32.028 common/nitrox: not in enabled drivers build config
00:25:32.028 common/qat: not in enabled drivers build config
00:25:32.028 common/sfc_efx: not in enabled drivers build config
00:25:32.028 mempool/bucket: not in enabled drivers build config
00:25:32.028 mempool/cnxk: not in enabled drivers build config
00:25:32.028 mempool/dpaa: not in enabled drivers build config
00:25:32.028 mempool/dpaa2: not in enabled drivers build config
00:25:32.028 mempool/octeontx: not in enabled drivers build config
00:25:32.028 mempool/stack: not in enabled drivers build config
00:25:32.028 dma/cnxk: not in enabled drivers build config
00:25:32.028 dma/dpaa: not in enabled drivers build config
00:25:32.028 dma/dpaa2: not in enabled drivers build config
00:25:32.028 dma/hisilicon: not in enabled drivers build config
00:25:32.028 dma/idxd: not in enabled drivers build config
00:25:32.028 dma/ioat: not in enabled drivers build config
00:25:32.028 dma/skeleton: not in enabled drivers build config
00:25:32.028 net/af_packet: not in enabled drivers build config
00:25:32.028 net/af_xdp: not in enabled drivers build config
00:25:32.028 net/ark: not in enabled drivers build config
00:25:32.028 net/atlantic: not in enabled drivers build config
00:25:32.028 net/avp: not in enabled drivers build config
00:25:32.028 net/axgbe: not in enabled drivers build config
00:25:32.028 net/bnx2x: not in enabled drivers build config
00:25:32.028 net/bnxt: not in enabled drivers build config
00:25:32.028 net/bonding: not in enabled drivers build config
00:25:32.028 net/cnxk: not in enabled drivers build config
00:25:32.028 net/cpfl: not in enabled drivers build config
00:25:32.028 net/cxgbe: not in enabled drivers build config
00:25:32.028 net/dpaa: not in enabled drivers build config
00:25:32.028 net/dpaa2: not in enabled drivers build config
00:25:32.028 net/e1000: not in enabled drivers build config
00:25:32.028 net/ena: not in enabled drivers build config
00:25:32.028 net/enetc: not in enabled drivers build config
00:25:32.028 net/enetfec: not in enabled drivers build config
00:25:32.028 net/enic: not in enabled drivers build config
00:25:32.028 net/failsafe: not in enabled drivers build config
00:25:32.028 net/fm10k: not in enabled drivers build config
00:25:32.028 net/gve: not in enabled drivers build config
00:25:32.028 net/hinic: not in enabled drivers build config
00:25:32.028 net/hns3: not in enabled drivers build config
00:25:32.028 net/i40e: not in enabled drivers build config
00:25:32.028 net/iavf: not in enabled drivers build config
00:25:32.028 net/ice: not in enabled drivers build config
00:25:32.028 net/idpf: not in enabled drivers build config
00:25:32.028 net/igc: not in enabled drivers build config
00:25:32.029 net/ionic: not in enabled drivers build config
00:25:32.029 net/ipn3ke: not in enabled drivers build config
00:25:32.029 net/ixgbe: not in enabled drivers build config
00:25:32.029 net/mana: not in enabled drivers build config
00:25:32.029 net/memif: not in enabled drivers build config
00:25:32.029 net/mlx4: not in enabled drivers build config
00:25:32.029 net/mlx5: not in enabled drivers build config
00:25:32.029 net/mvneta: not in enabled drivers build config
00:25:32.029 net/mvpp2: not in enabled drivers build config
00:25:32.029 net/netvsc: not in enabled drivers build config
00:25:32.029 net/nfb: not in enabled drivers build config
00:25:32.029 net/nfp: not in enabled drivers build config
00:25:32.029 net/ngbe: not in enabled drivers build config
00:25:32.029 net/null: not in enabled drivers build config
00:25:32.029 net/octeontx: not in enabled drivers build config
00:25:32.029 net/octeon_ep: not in enabled drivers build config
00:25:32.029 net/pcap: not in enabled drivers build config
00:25:32.029 net/pfe: not in enabled drivers build config
00:25:32.029 net/qede: not in enabled drivers build config
00:25:32.029 net/ring: not in enabled drivers build config
00:25:32.029 net/sfc: not in enabled drivers build config
00:25:32.029 net/softnic: not in enabled drivers build config
00:25:32.029 net/tap: not in enabled drivers build config
00:25:32.029 net/thunderx: not in enabled drivers build config
00:25:32.029 net/txgbe: not in enabled drivers build config
00:25:32.029 net/vdev_netvsc: not in enabled drivers build config
00:25:32.029 net/vhost: not in enabled drivers build config
00:25:32.029 net/virtio: not in enabled drivers build config
00:25:32.029 net/vmxnet3: not in enabled drivers build config
00:25:32.029 raw/*: missing internal dependency, "rawdev"
00:25:32.029 crypto/armv8: not in enabled drivers build config
00:25:32.029 crypto/bcmfs: not in enabled drivers build config
00:25:32.029 crypto/caam_jr: not in enabled drivers build config
00:25:32.029 crypto/ccp: not in enabled drivers build config
00:25:32.029 crypto/cnxk: not in enabled drivers build config
00:25:32.029 crypto/dpaa_sec: not in enabled drivers build config
00:25:32.029 crypto/dpaa2_sec: not in enabled drivers build config
00:25:32.029 crypto/ipsec_mb: not in enabled drivers build config
00:25:32.029 crypto/mlx5: not in enabled drivers build config
00:25:32.029 crypto/mvsam: not in enabled drivers build config
00:25:32.029 crypto/nitrox: not in enabled drivers build config 00:25:32.029 crypto/null: not in enabled drivers build config 00:25:32.029 crypto/octeontx: not in enabled drivers build config 00:25:32.029 crypto/openssl: not in enabled drivers build config 00:25:32.029 crypto/scheduler: not in enabled drivers build config 00:25:32.029 crypto/uadk: not in enabled drivers build config 00:25:32.029 crypto/virtio: not in enabled drivers build config 00:25:32.029 compress/isal: not in enabled drivers build config 00:25:32.029 compress/mlx5: not in enabled drivers build config 00:25:32.029 compress/nitrox: not in enabled drivers build config 00:25:32.029 compress/octeontx: not in enabled drivers build config 00:25:32.029 compress/zlib: not in enabled drivers build config 00:25:32.029 regex/*: missing internal dependency, "regexdev" 00:25:32.029 ml/*: missing internal dependency, "mldev" 00:25:32.029 vdpa/ifc: not in enabled drivers build config 00:25:32.029 vdpa/mlx5: not in enabled drivers build config 00:25:32.029 vdpa/nfp: not in enabled drivers build config 00:25:32.029 vdpa/sfc: not in enabled drivers build config 00:25:32.029 event/*: missing internal dependency, "eventdev" 00:25:32.029 baseband/*: missing internal dependency, "bbdev" 00:25:32.029 gpu/*: missing internal dependency, "gpudev" 00:25:32.029 00:25:32.029 00:25:32.029 Build targets in project: 85 00:25:32.029 00:25:32.029 DPDK 24.03.0 00:25:32.029 00:25:32.029 User defined options 00:25:32.029 buildtype : debug 00:25:32.029 default_library : shared 00:25:32.029 libdir : lib 00:25:32.029 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:25:32.029 b_sanitize : address 00:25:32.029 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:25:32.029 c_link_args : 00:25:32.029 cpu_instruction_set: native 00:25:32.029 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:25:32.029 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:25:32.029 enable_docs : false 00:25:32.029 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:25:32.029 enable_kmods : false 00:25:32.029 max_lcores : 128 00:25:32.029 tests : false 00:25:32.029 00:25:32.029 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:25:32.029 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:25:32.029 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:25:32.029 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:25:32.029 [3/268] Linking static target lib/librte_log.a 00:25:32.029 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:25:32.029 [5/268] Linking static target lib/librte_kvargs.a 00:25:32.029 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:25:32.029 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:25:32.287 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:25:32.287 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
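Annotation: the "User defined options" summary above corresponds to a meson setup roughly along the lines of -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address with the listed apps and libs disabled; with only the bus/pci, bus/vdev and mempool/ring driver families enabled, any consumer of this DPDK build is expected to come up through EAL. A minimal, hypothetical consumer sketch in C (not part of this job; error handling trimmed):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL arguments (cores, memory, PCI
         * allow list) and returns the number of consumed argv entries,
         * or -1 on failure. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        printf("EAL ready, main lcore %u\n", rte_get_main_lcore());

        /* Release hugepages and other EAL resources on shutdown. */
        rte_eal_cleanup();
        return 0;
    }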
00:25:32.287 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:25:32.287 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:25:32.287 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:25:32.287 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:25:32.287 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:25:32.545 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:25:32.545 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:25:32.545 [17/268] Linking static target lib/librte_telemetry.a 00:25:32.804 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:25:32.804 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:25:32.804 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:25:32.804 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:25:33.062 [22/268] Linking target lib/librte_log.so.24.1 00:25:33.062 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:25:33.062 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:25:33.062 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:25:33.062 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:25:33.350 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:25:33.350 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:25:33.350 [29/268] Linking target lib/librte_kvargs.so.24.1 00:25:33.350 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:25:33.624 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:25:33.624 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:25:33.624 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:25:33.624 [34/268] Linking target lib/librte_telemetry.so.24.1 00:25:33.882 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:25:33.882 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:25:33.882 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:25:33.882 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:25:33.882 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:25:33.882 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:25:33.882 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:25:34.141 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:25:34.141 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:25:34.141 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:25:34.400 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:25:34.400 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:25:34.400 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:25:34.658 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:25:34.658 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:25:34.658 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:25:34.917 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:25:34.917 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:25:34.917 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:25:34.917 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:25:34.917 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:25:35.175 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:25:35.175 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:25:35.436 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:25:35.436 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:25:35.436 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:25:35.436 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:25:35.436 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:25:35.436 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:25:35.695 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:25:35.695 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:25:35.695 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:25:35.954 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:25:36.212 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:25:36.212 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:25:36.212 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:25:36.212 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:25:36.470 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:25:36.470 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:25:36.470 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:25:36.470 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:25:36.470 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:25:36.470 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:25:36.732 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:25:36.732 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:25:36.732 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:25:36.732 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:25:36.990 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:25:36.990 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:25:36.990 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:25:36.990 [85/268] Linking static target lib/librte_ring.a 00:25:36.990 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:25:37.248 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:25:37.248 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:25:37.248 [89/268] Linking static target 
lib/librte_eal.a 00:25:37.506 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:25:37.506 [91/268] Linking static target lib/librte_rcu.a 00:25:37.506 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:25:37.506 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:25:37.506 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:25:37.506 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.764 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:25:37.764 [97/268] Linking static target lib/librte_mempool.a 00:25:37.764 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:25:37.764 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:25:37.764 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:25:38.022 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:25:38.022 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:25:38.022 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.280 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:25:38.280 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:25:38.280 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:25:38.280 [107/268] Linking static target lib/librte_net.a 00:25:38.280 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:25:38.280 [109/268] Linking static target lib/librte_meter.a 00:25:38.538 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:25:38.538 [111/268] Linking static target lib/librte_mbuf.a 00:25:38.538 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:25:38.796 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:25:38.796 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:25:38.796 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:25:38.796 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.055 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.055 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.313 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:25:39.572 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:25:39.572 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:25:39.843 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.843 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:25:40.136 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:25:40.136 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:25:40.136 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:25:40.136 [127/268] Linking static target lib/librte_pci.a 00:25:40.136 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:25:40.136 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:25:40.393 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:25:40.393 
[131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:25:40.393 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:25:40.652 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:25:40.652 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:25:40.652 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.652 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:25:40.652 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:25:40.652 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:25:40.652 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:25:40.911 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:25:40.911 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:25:40.911 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:25:40.911 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:25:40.911 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:25:40.911 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:25:40.911 [146/268] Linking static target lib/librte_cmdline.a 00:25:40.911 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:25:41.171 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:25:41.171 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:25:41.171 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:25:41.738 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:25:41.738 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:25:41.738 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:25:41.738 [154/268] Linking static target lib/librte_timer.a 00:25:41.997 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:25:41.997 [156/268] Linking static target lib/librte_ethdev.a 00:25:41.997 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:25:41.997 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:25:42.255 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:25:42.255 [160/268] Linking static target lib/librte_compressdev.a 00:25:42.255 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:25:42.513 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:25:42.513 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:25:42.513 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:25:42.513 [165/268] Linking static target lib/librte_hash.a 00:25:42.770 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:25:42.770 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:25:42.770 [168/268] Linking static target lib/librte_dmadev.a 00:25:42.770 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.029 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
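Annotation: the ring, mempool and mbuf libraries linked in the steps above form the allocation core that the later targets (ethdev, cryptodev, vhost) build on. A sketch of the usual packet-buffer pattern, assuming EAL is already initialized as in the earlier snippet; pool name and sizing here are illustrative only:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static int demo_mbuf_pool(void)
    {
        /* 8191 mbufs, a 256-entry per-lcore cache, default buffer size,
         * allocated on the caller's NUMA socket. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("demo_pool",
                8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            return -1;

        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        if (m != NULL) {
            /* ... fill the mbuf and hand it to a driver or ring ... */
            rte_pktmbuf_free(m);
        }
        return 0;
    }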
00:25:43.029 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:25:43.029 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:25:43.287 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:25:43.287 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.544 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:25:43.544 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:25:43.544 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:25:43.544 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:25:43.802 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:25:43.802 [180/268] Linking static target lib/librte_cryptodev.a 00:25:43.802 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:25:43.802 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:25:44.060 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:44.060 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:25:44.060 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:25:44.060 [186/268] Linking static target lib/librte_power.a 00:25:44.317 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:25:44.317 [188/268] Linking static target lib/librte_reorder.a 00:25:44.317 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:25:44.882 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:25:44.882 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:25:44.882 [192/268] Linking static target lib/librte_security.a 00:25:44.882 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:25:45.140 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:25:45.399 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:25:45.656 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:25:45.656 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:25:45.913 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:25:45.913 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:25:46.171 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:25:46.171 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:25:46.738 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:46.738 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:25:46.738 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:25:46.738 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:25:46.738 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:25:46.738 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:25:46.738 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:25:46.738 [209/268] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:25:47.322 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:25:47.322 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:25:47.322 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:25:47.322 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:47.322 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:47.322 [215/268] Linking static target drivers/librte_bus_pci.a 00:25:47.581 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:25:47.581 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:47.581 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:47.581 [219/268] Linking static target drivers/librte_bus_vdev.a 00:25:47.581 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:25:47.581 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:25:47.839 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:25:47.839 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:47.839 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:47.839 [225/268] Linking static target drivers/librte_mempool_ring.a 00:25:47.839 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:48.097 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:48.664 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:50.637 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:25:50.637 [230/268] Linking target lib/librte_eal.so.24.1 00:25:50.895 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:25:50.895 [232/268] Linking target lib/librte_meter.so.24.1 00:25:50.895 [233/268] Linking target lib/librte_ring.so.24.1 00:25:50.895 [234/268] Linking target lib/librte_timer.so.24.1 00:25:50.895 [235/268] Linking target lib/librte_dmadev.so.24.1 00:25:50.895 [236/268] Linking target lib/librte_pci.so.24.1 00:25:50.895 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:25:50.895 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:25:50.895 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:25:50.895 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:25:51.153 [241/268] Linking target lib/librte_mempool.so.24.1 00:25:51.153 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:25:51.153 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:25:51.153 [244/268] Linking target lib/librte_rcu.so.24.1 00:25:51.153 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:25:51.153 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:51.153 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:25:51.153 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:25:51.153 
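Annotation: mempool/ring, built just above, is the only mempool driver enabled in this configuration (see enable_drivers in the options summary); it layers the mempool handler on top of a plain rte_ring. Driving a ring directly looks like the following sketch (single-producer/single-consumer flags chosen purely for illustration):

    #include <rte_ring.h>

    static int demo_ring(void)
    {
        struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                SOCKET_ID_ANY, RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
            return -1;

        void *obj = (void *)0x1;   /* any pointer-sized payload */
        if (rte_ring_enqueue(r, obj) == 0) {
            void *out = NULL;
            /* rte_ring_dequeue() returns 0 and sets out on success. */
            rte_ring_dequeue(r, &out);
        }
        rte_ring_free(r);
        return 0;
    }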
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:25:51.153 [250/268] Linking target lib/librte_mbuf.so.24.1 00:25:51.412 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:25:51.412 [252/268] Linking target lib/librte_compressdev.so.24.1 00:25:51.412 [253/268] Linking target lib/librte_net.so.24.1 00:25:51.412 [254/268] Linking target lib/librte_reorder.so.24.1 00:25:51.412 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:25:51.670 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:25:51.670 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:25:51.670 [258/268] Linking target lib/librte_cmdline.so.24.1 00:25:51.670 [259/268] Linking target lib/librte_hash.so.24.1 00:25:51.670 [260/268] Linking target lib/librte_ethdev.so.24.1 00:25:51.670 [261/268] Linking target lib/librte_security.so.24.1 00:25:51.929 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:25:51.929 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:25:51.929 [264/268] Linking target lib/librte_power.so.24.1 00:25:53.839 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:53.839 [266/268] Linking static target lib/librte_vhost.a 00:25:55.743 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:25:55.743 [268/268] Linking target lib/librte_vhost.so.24.1 00:25:55.743 INFO: autodetecting backend as ninja 00:25:55.743 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:26:17.787 CC lib/ut/ut.o 00:26:17.787 CC lib/ut_mock/mock.o 00:26:17.787 CC lib/log/log.o 00:26:17.787 CC lib/log/log_flags.o 00:26:17.787 CC lib/log/log_deprecated.o 00:26:17.787 LIB libspdk_ut.a 00:26:17.787 LIB libspdk_log.a 00:26:17.787 LIB libspdk_ut_mock.a 00:26:17.787 SO libspdk_ut.so.2.0 00:26:17.787 SO libspdk_ut_mock.so.6.0 00:26:17.787 SO libspdk_log.so.7.1 00:26:17.787 SYMLINK libspdk_ut_mock.so 00:26:17.787 SYMLINK libspdk_ut.so 00:26:17.787 SYMLINK libspdk_log.so 00:26:17.787 CXX lib/trace_parser/trace.o 00:26:17.787 CC lib/ioat/ioat.o 00:26:17.787 CC lib/dma/dma.o 00:26:17.787 CC lib/util/base64.o 00:26:17.787 CC lib/util/crc16.o 00:26:17.787 CC lib/util/cpuset.o 00:26:17.787 CC lib/util/bit_array.o 00:26:17.787 CC lib/util/crc32.o 00:26:17.787 CC lib/util/crc32c.o 00:26:17.787 CC lib/vfio_user/host/vfio_user_pci.o 00:26:17.787 CC lib/util/crc32_ieee.o 00:26:17.787 CC lib/vfio_user/host/vfio_user.o 00:26:17.787 CC lib/util/crc64.o 00:26:17.787 CC lib/util/dif.o 00:26:17.787 LIB libspdk_dma.a 00:26:17.787 CC lib/util/fd.o 00:26:17.787 SO libspdk_dma.so.5.0 00:26:17.787 CC lib/util/fd_group.o 00:26:17.787 CC lib/util/file.o 00:26:17.787 SYMLINK libspdk_dma.so 00:26:17.788 CC lib/util/hexlify.o 00:26:17.788 CC lib/util/iov.o 00:26:17.788 LIB libspdk_ioat.a 00:26:17.788 CC lib/util/math.o 00:26:17.788 SO libspdk_ioat.so.7.0 00:26:17.788 CC lib/util/net.o 00:26:17.788 SYMLINK libspdk_ioat.so 00:26:17.788 CC lib/util/pipe.o 00:26:17.788 LIB libspdk_vfio_user.a 00:26:17.788 SO libspdk_vfio_user.so.5.0 00:26:17.788 CC lib/util/strerror_tls.o 00:26:17.788 CC lib/util/string.o 00:26:17.788 CC lib/util/uuid.o 00:26:17.788 CC lib/util/xor.o 00:26:17.788 SYMLINK libspdk_vfio_user.so 00:26:17.788 CC lib/util/zipf.o 00:26:17.788 CC lib/util/md5.o 00:26:17.788 LIB libspdk_trace_parser.a 00:26:17.788 SO 
libspdk_trace_parser.so.6.0 00:26:17.788 LIB libspdk_util.a 00:26:17.788 SYMLINK libspdk_trace_parser.so 00:26:17.788 SO libspdk_util.so.10.1 00:26:17.788 SYMLINK libspdk_util.so 00:26:17.788 CC lib/json/json_parse.o 00:26:17.788 CC lib/json/json_util.o 00:26:17.788 CC lib/json/json_write.o 00:26:17.788 CC lib/conf/conf.o 00:26:17.788 CC lib/idxd/idxd.o 00:26:17.788 CC lib/idxd/idxd_user.o 00:26:17.788 CC lib/idxd/idxd_kernel.o 00:26:17.788 CC lib/rdma_utils/rdma_utils.o 00:26:17.788 CC lib/vmd/vmd.o 00:26:17.788 CC lib/env_dpdk/env.o 00:26:17.788 CC lib/env_dpdk/memory.o 00:26:17.788 CC lib/env_dpdk/pci.o 00:26:17.788 LIB libspdk_conf.a 00:26:17.788 SO libspdk_conf.so.6.0 00:26:17.788 LIB libspdk_rdma_utils.a 00:26:17.788 CC lib/env_dpdk/init.o 00:26:17.788 CC lib/env_dpdk/threads.o 00:26:17.788 LIB libspdk_json.a 00:26:17.788 SO libspdk_rdma_utils.so.1.0 00:26:17.788 SYMLINK libspdk_conf.so 00:26:17.788 CC lib/env_dpdk/pci_ioat.o 00:26:17.788 SO libspdk_json.so.6.0 00:26:17.788 SYMLINK libspdk_rdma_utils.so 00:26:17.788 CC lib/vmd/led.o 00:26:17.788 SYMLINK libspdk_json.so 00:26:17.788 CC lib/env_dpdk/pci_virtio.o 00:26:17.788 CC lib/env_dpdk/pci_vmd.o 00:26:17.788 CC lib/env_dpdk/pci_idxd.o 00:26:17.788 CC lib/rdma_provider/common.o 00:26:17.788 CC lib/rdma_provider/rdma_provider_verbs.o 00:26:17.788 CC lib/env_dpdk/pci_event.o 00:26:17.788 CC lib/env_dpdk/sigbus_handler.o 00:26:17.788 CC lib/env_dpdk/pci_dpdk.o 00:26:17.788 LIB libspdk_idxd.a 00:26:17.788 SO libspdk_idxd.so.12.1 00:26:17.788 LIB libspdk_vmd.a 00:26:17.788 CC lib/jsonrpc/jsonrpc_server.o 00:26:17.788 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:26:17.788 SO libspdk_vmd.so.6.0 00:26:17.788 SYMLINK libspdk_idxd.so 00:26:17.788 CC lib/jsonrpc/jsonrpc_client.o 00:26:17.788 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:26:17.788 LIB libspdk_rdma_provider.a 00:26:17.788 SO libspdk_rdma_provider.so.7.0 00:26:17.788 SYMLINK libspdk_vmd.so 00:26:17.788 CC lib/env_dpdk/pci_dpdk_2207.o 00:26:17.788 CC lib/env_dpdk/pci_dpdk_2211.o 00:26:17.788 SYMLINK libspdk_rdma_provider.so 00:26:17.788 LIB libspdk_jsonrpc.a 00:26:17.788 SO libspdk_jsonrpc.so.6.0 00:26:18.047 SYMLINK libspdk_jsonrpc.so 00:26:18.305 CC lib/rpc/rpc.o 00:26:18.563 LIB libspdk_rpc.a 00:26:18.563 SO libspdk_rpc.so.6.0 00:26:18.563 LIB libspdk_env_dpdk.a 00:26:18.563 SYMLINK libspdk_rpc.so 00:26:18.563 SO libspdk_env_dpdk.so.15.1 00:26:18.854 CC lib/notify/notify.o 00:26:18.854 CC lib/notify/notify_rpc.o 00:26:18.854 CC lib/trace/trace.o 00:26:18.854 CC lib/trace/trace_rpc.o 00:26:18.854 CC lib/trace/trace_flags.o 00:26:18.854 CC lib/keyring/keyring_rpc.o 00:26:18.854 CC lib/keyring/keyring.o 00:26:18.854 SYMLINK libspdk_env_dpdk.so 00:26:19.113 LIB libspdk_notify.a 00:26:19.113 SO libspdk_notify.so.6.0 00:26:19.113 SYMLINK libspdk_notify.so 00:26:19.113 LIB libspdk_keyring.a 00:26:19.113 LIB libspdk_trace.a 00:26:19.113 SO libspdk_keyring.so.2.0 00:26:19.113 SO libspdk_trace.so.11.0 00:26:19.372 SYMLINK libspdk_keyring.so 00:26:19.372 SYMLINK libspdk_trace.so 00:26:19.630 CC lib/thread/iobuf.o 00:26:19.630 CC lib/thread/thread.o 00:26:19.630 CC lib/sock/sock.o 00:26:19.630 CC lib/sock/sock_rpc.o 00:26:20.198 LIB libspdk_sock.a 00:26:20.198 SO libspdk_sock.so.10.0 00:26:20.198 SYMLINK libspdk_sock.so 00:26:20.765 CC lib/nvme/nvme_ctrlr.o 00:26:20.765 CC lib/nvme/nvme_ctrlr_cmd.o 00:26:20.765 CC lib/nvme/nvme_ns_cmd.o 00:26:20.765 CC lib/nvme/nvme_ns.o 00:26:20.765 CC lib/nvme/nvme_pcie_common.o 00:26:20.765 CC lib/nvme/nvme_fabric.o 00:26:20.765 CC lib/nvme/nvme_qpair.o 
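Annotation: the build has now moved from DPDK to SPDK proper, and lib/nvme (the userspace NVMe driver this nvme-vg-autotest job exercises) starts compiling here. The canonical entry point into that library is enumerate-and-attach; a hedged sketch, assuming the spdk_env_init()/spdk_nvme_probe() APIs from spdk/env.h and spdk/nvme.h:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;   /* attach to every controller that probes */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached to %s\n", trid->traddr);
        /* a real tool would record ctrlr and later spdk_nvme_detach() it */
    }

    int main(int argc, char **argv)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "probe_demo";
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* NULL trid probes the local PCIe bus by default. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
            return 1;
        return 0;
    }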
00:26:20.765 CC lib/nvme/nvme_pcie.o 00:26:20.765 CC lib/nvme/nvme.o 00:26:21.332 CC lib/nvme/nvme_quirks.o 00:26:21.591 CC lib/nvme/nvme_transport.o 00:26:21.591 CC lib/nvme/nvme_discovery.o 00:26:21.591 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:26:21.591 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:21.591 CC lib/nvme/nvme_tcp.o 00:26:21.591 LIB libspdk_thread.a 00:26:21.591 SO libspdk_thread.so.11.0 00:26:21.591 CC lib/nvme/nvme_opal.o 00:26:21.852 SYMLINK libspdk_thread.so 00:26:21.852 CC lib/nvme/nvme_io_msg.o 00:26:21.852 CC lib/nvme/nvme_poll_group.o 00:26:22.110 CC lib/nvme/nvme_zns.o 00:26:22.110 CC lib/nvme/nvme_stubs.o 00:26:22.370 CC lib/accel/accel.o 00:26:22.370 CC lib/accel/accel_rpc.o 00:26:22.370 CC lib/accel/accel_sw.o 00:26:22.370 CC lib/blob/blobstore.o 00:26:22.628 CC lib/nvme/nvme_auth.o 00:26:22.628 CC lib/nvme/nvme_cuse.o 00:26:22.628 CC lib/nvme/nvme_rdma.o 00:26:22.886 CC lib/init/json_config.o 00:26:23.145 CC lib/virtio/virtio.o 00:26:23.145 CC lib/fsdev/fsdev.o 00:26:23.145 CC lib/init/subsystem.o 00:26:23.403 CC lib/init/subsystem_rpc.o 00:26:23.403 CC lib/virtio/virtio_vhost_user.o 00:26:23.403 CC lib/fsdev/fsdev_io.o 00:26:23.662 CC lib/init/rpc.o 00:26:23.662 CC lib/fsdev/fsdev_rpc.o 00:26:23.662 CC lib/blob/request.o 00:26:23.662 CC lib/blob/zeroes.o 00:26:23.662 CC lib/virtio/virtio_vfio_user.o 00:26:23.662 LIB libspdk_accel.a 00:26:23.662 LIB libspdk_init.a 00:26:23.662 SO libspdk_init.so.6.0 00:26:23.662 SO libspdk_accel.so.16.0 00:26:23.921 CC lib/blob/blob_bs_dev.o 00:26:23.921 SYMLINK libspdk_accel.so 00:26:23.921 CC lib/virtio/virtio_pci.o 00:26:23.921 SYMLINK libspdk_init.so 00:26:23.921 LIB libspdk_fsdev.a 00:26:23.921 SO libspdk_fsdev.so.2.0 00:26:24.179 CC lib/bdev/bdev.o 00:26:24.179 CC lib/bdev/bdev_rpc.o 00:26:24.179 CC lib/bdev/bdev_zone.o 00:26:24.179 CC lib/bdev/part.o 00:26:24.179 CC lib/event/app.o 00:26:24.179 SYMLINK libspdk_fsdev.so 00:26:24.179 CC lib/bdev/scsi_nvme.o 00:26:24.179 LIB libspdk_virtio.a 00:26:24.179 SO libspdk_virtio.so.7.0 00:26:24.438 CC lib/event/reactor.o 00:26:24.438 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:26:24.438 CC lib/event/log_rpc.o 00:26:24.438 SYMLINK libspdk_virtio.so 00:26:24.438 CC lib/event/app_rpc.o 00:26:24.438 CC lib/event/scheduler_static.o 00:26:24.438 LIB libspdk_nvme.a 00:26:24.696 SO libspdk_nvme.so.15.0 00:26:24.954 LIB libspdk_event.a 00:26:24.954 SO libspdk_event.so.14.0 00:26:24.954 SYMLINK libspdk_event.so 00:26:25.214 LIB libspdk_fuse_dispatcher.a 00:26:25.214 SYMLINK libspdk_nvme.so 00:26:25.214 SO libspdk_fuse_dispatcher.so.1.0 00:26:25.214 SYMLINK libspdk_fuse_dispatcher.so 00:26:26.632 LIB libspdk_blob.a 00:26:26.892 SO libspdk_blob.so.11.0 00:26:26.892 SYMLINK libspdk_blob.so 00:26:27.151 CC lib/lvol/lvol.o 00:26:27.151 CC lib/blobfs/blobfs.o 00:26:27.151 CC lib/blobfs/tree.o 00:26:27.718 LIB libspdk_bdev.a 00:26:27.718 SO libspdk_bdev.so.17.0 00:26:27.977 SYMLINK libspdk_bdev.so 00:26:27.977 CC lib/scsi/dev.o 00:26:27.977 CC lib/scsi/lun.o 00:26:27.977 CC lib/scsi/scsi.o 00:26:27.977 CC lib/scsi/port.o 00:26:27.977 CC lib/nbd/nbd.o 00:26:27.977 CC lib/ftl/ftl_core.o 00:26:27.977 CC lib/nvmf/ctrlr.o 00:26:28.235 CC lib/ublk/ublk.o 00:26:28.235 CC lib/nvmf/ctrlr_discovery.o 00:26:28.235 LIB libspdk_blobfs.a 00:26:28.235 CC lib/nvmf/ctrlr_bdev.o 00:26:28.235 SO libspdk_blobfs.so.10.0 00:26:28.494 CC lib/nvmf/subsystem.o 00:26:28.494 CC lib/scsi/scsi_bdev.o 00:26:28.494 SYMLINK libspdk_blobfs.so 00:26:28.494 CC lib/ftl/ftl_init.o 00:26:28.494 LIB libspdk_lvol.a 00:26:28.494 SO 
libspdk_lvol.so.10.0 00:26:28.494 CC lib/nbd/nbd_rpc.o 00:26:28.752 CC lib/nvmf/nvmf.o 00:26:28.752 SYMLINK libspdk_lvol.so 00:26:28.752 CC lib/nvmf/nvmf_rpc.o 00:26:28.752 CC lib/ftl/ftl_layout.o 00:26:28.752 LIB libspdk_nbd.a 00:26:28.752 SO libspdk_nbd.so.7.0 00:26:28.752 CC lib/ftl/ftl_debug.o 00:26:29.010 SYMLINK libspdk_nbd.so 00:26:29.010 CC lib/nvmf/transport.o 00:26:29.010 CC lib/ublk/ublk_rpc.o 00:26:29.010 CC lib/scsi/scsi_pr.o 00:26:29.010 CC lib/scsi/scsi_rpc.o 00:26:29.268 CC lib/ftl/ftl_io.o 00:26:29.268 CC lib/ftl/ftl_sb.o 00:26:29.268 LIB libspdk_ublk.a 00:26:29.268 CC lib/ftl/ftl_l2p.o 00:26:29.268 SO libspdk_ublk.so.3.0 00:26:29.269 SYMLINK libspdk_ublk.so 00:26:29.269 CC lib/nvmf/tcp.o 00:26:29.527 CC lib/ftl/ftl_l2p_flat.o 00:26:29.527 CC lib/scsi/task.o 00:26:29.527 CC lib/nvmf/stubs.o 00:26:29.527 CC lib/ftl/ftl_nv_cache.o 00:26:29.786 CC lib/ftl/ftl_band.o 00:26:29.786 LIB libspdk_scsi.a 00:26:29.786 CC lib/nvmf/mdns_server.o 00:26:29.786 CC lib/nvmf/rdma.o 00:26:29.786 CC lib/nvmf/auth.o 00:26:29.786 SO libspdk_scsi.so.9.0 00:26:30.045 SYMLINK libspdk_scsi.so 00:26:30.045 CC lib/ftl/ftl_band_ops.o 00:26:30.045 CC lib/ftl/ftl_writer.o 00:26:30.045 CC lib/ftl/ftl_rq.o 00:26:30.304 CC lib/ftl/ftl_reloc.o 00:26:30.304 CC lib/ftl/ftl_l2p_cache.o 00:26:30.304 CC lib/ftl/ftl_p2l.o 00:26:30.563 CC lib/ftl/ftl_p2l_log.o 00:26:30.563 CC lib/iscsi/conn.o 00:26:30.563 CC lib/vhost/vhost.o 00:26:30.821 CC lib/vhost/vhost_rpc.o 00:26:30.821 CC lib/iscsi/init_grp.o 00:26:30.821 CC lib/iscsi/iscsi.o 00:26:30.821 CC lib/ftl/mngt/ftl_mngt.o 00:26:30.821 CC lib/vhost/vhost_scsi.o 00:26:31.080 CC lib/vhost/vhost_blk.o 00:26:31.080 CC lib/vhost/rte_vhost_user.o 00:26:31.339 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:31.339 CC lib/iscsi/param.o 00:26:31.598 CC lib/iscsi/portal_grp.o 00:26:31.598 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:31.598 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:31.598 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:31.598 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:31.598 CC lib/iscsi/tgt_node.o 00:26:31.856 CC lib/iscsi/iscsi_subsystem.o 00:26:31.856 CC lib/iscsi/iscsi_rpc.o 00:26:32.117 CC lib/iscsi/task.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:26:32.117 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:32.378 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:32.378 CC lib/ftl/utils/ftl_conf.o 00:26:32.378 LIB libspdk_vhost.a 00:26:32.378 CC lib/ftl/utils/ftl_md.o 00:26:32.378 CC lib/ftl/utils/ftl_mempool.o 00:26:32.378 CC lib/ftl/utils/ftl_bitmap.o 00:26:32.378 SO libspdk_vhost.so.8.0 00:26:32.636 CC lib/ftl/utils/ftl_property.o 00:26:32.636 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:32.636 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:32.636 SYMLINK libspdk_vhost.so 00:26:32.636 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:32.636 LIB libspdk_iscsi.a 00:26:32.636 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:32.636 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:32.636 SO libspdk_iscsi.so.8.0 00:26:32.894 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:32.894 LIB libspdk_nvmf.a 00:26:32.894 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:32.894 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:32.894 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:32.894 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:32.894 SYMLINK libspdk_iscsi.so 00:26:32.894 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:32.894 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:26:32.894 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:26:32.894 SO libspdk_nvmf.so.20.0 00:26:33.152 CC lib/ftl/base/ftl_base_dev.o 00:26:33.152 CC lib/ftl/base/ftl_base_bdev.o 00:26:33.152 CC lib/ftl/ftl_trace.o 00:26:33.411 SYMLINK libspdk_nvmf.so 00:26:33.411 LIB libspdk_ftl.a 00:26:33.669 SO libspdk_ftl.so.9.0 00:26:33.927 SYMLINK libspdk_ftl.so 00:26:34.493 CC module/env_dpdk/env_dpdk_rpc.o 00:26:34.493 CC module/accel/error/accel_error.o 00:26:34.493 CC module/keyring/file/keyring.o 00:26:34.493 CC module/accel/dsa/accel_dsa.o 00:26:34.493 CC module/accel/ioat/accel_ioat.o 00:26:34.493 CC module/accel/iaa/accel_iaa.o 00:26:34.493 CC module/scheduler/dynamic/scheduler_dynamic.o 00:26:34.493 CC module/sock/posix/posix.o 00:26:34.493 CC module/fsdev/aio/fsdev_aio.o 00:26:34.493 CC module/blob/bdev/blob_bdev.o 00:26:34.493 LIB libspdk_env_dpdk_rpc.a 00:26:34.751 SO libspdk_env_dpdk_rpc.so.6.0 00:26:34.751 CC module/keyring/file/keyring_rpc.o 00:26:34.751 SYMLINK libspdk_env_dpdk_rpc.so 00:26:34.751 CC module/fsdev/aio/fsdev_aio_rpc.o 00:26:34.751 CC module/accel/ioat/accel_ioat_rpc.o 00:26:34.751 CC module/accel/iaa/accel_iaa_rpc.o 00:26:34.751 CC module/accel/error/accel_error_rpc.o 00:26:34.751 LIB libspdk_scheduler_dynamic.a 00:26:34.751 SO libspdk_scheduler_dynamic.so.4.0 00:26:34.751 LIB libspdk_keyring_file.a 00:26:35.009 CC module/accel/dsa/accel_dsa_rpc.o 00:26:35.009 LIB libspdk_accel_ioat.a 00:26:35.009 SO libspdk_keyring_file.so.2.0 00:26:35.009 LIB libspdk_blob_bdev.a 00:26:35.009 SO libspdk_accel_ioat.so.6.0 00:26:35.009 LIB libspdk_accel_iaa.a 00:26:35.009 SO libspdk_blob_bdev.so.11.0 00:26:35.009 SYMLINK libspdk_scheduler_dynamic.so 00:26:35.009 SYMLINK libspdk_keyring_file.so 00:26:35.009 LIB libspdk_accel_error.a 00:26:35.009 SO libspdk_accel_iaa.so.3.0 00:26:35.009 SYMLINK libspdk_accel_ioat.so 00:26:35.009 SO libspdk_accel_error.so.2.0 00:26:35.009 CC module/fsdev/aio/linux_aio_mgr.o 00:26:35.009 SYMLINK libspdk_blob_bdev.so 00:26:35.009 LIB libspdk_accel_dsa.a 00:26:35.009 SO libspdk_accel_dsa.so.5.0 00:26:35.009 SYMLINK libspdk_accel_iaa.so 00:26:35.268 SYMLINK libspdk_accel_error.so 00:26:35.268 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:26:35.268 SYMLINK libspdk_accel_dsa.so 00:26:35.268 CC module/scheduler/gscheduler/gscheduler.o 00:26:35.268 CC module/keyring/linux/keyring.o 00:26:35.268 CC module/keyring/linux/keyring_rpc.o 00:26:35.526 LIB libspdk_scheduler_dpdk_governor.a 00:26:35.527 CC module/bdev/error/vbdev_error.o 00:26:35.527 SO libspdk_scheduler_dpdk_governor.so.4.0 00:26:35.527 LIB libspdk_scheduler_gscheduler.a 00:26:35.527 CC module/blobfs/bdev/blobfs_bdev.o 00:26:35.527 LIB libspdk_fsdev_aio.a 00:26:35.527 CC module/bdev/delay/vbdev_delay.o 00:26:35.527 CC module/bdev/gpt/gpt.o 00:26:35.527 LIB libspdk_keyring_linux.a 00:26:35.527 SO libspdk_scheduler_gscheduler.so.4.0 00:26:35.527 SO libspdk_keyring_linux.so.1.0 00:26:35.527 SO libspdk_fsdev_aio.so.1.0 00:26:35.527 LIB libspdk_sock_posix.a 00:26:35.527 SYMLINK libspdk_scheduler_dpdk_governor.so 00:26:35.527 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:35.527 SYMLINK libspdk_scheduler_gscheduler.so 00:26:35.527 SO libspdk_sock_posix.so.6.0 00:26:35.527 SYMLINK libspdk_keyring_linux.so 00:26:35.785 SYMLINK libspdk_fsdev_aio.so 00:26:35.785 SYMLINK libspdk_sock_posix.so 00:26:35.785 CC module/bdev/lvol/vbdev_lvol.o 00:26:35.785 CC module/bdev/gpt/vbdev_gpt.o 00:26:35.785 CC module/bdev/error/vbdev_error_rpc.o 00:26:35.785 LIB libspdk_blobfs_bdev.a 00:26:35.785 CC module/bdev/null/bdev_null.o 00:26:35.785 CC 
module/bdev/malloc/bdev_malloc.o 00:26:35.785 SO libspdk_blobfs_bdev.so.6.0 00:26:36.044 CC module/bdev/nvme/bdev_nvme.o 00:26:36.044 SYMLINK libspdk_blobfs_bdev.so 00:26:36.044 CC module/bdev/passthru/vbdev_passthru.o 00:26:36.044 CC module/bdev/delay/vbdev_delay_rpc.o 00:26:36.044 CC module/bdev/raid/bdev_raid.o 00:26:36.044 LIB libspdk_bdev_error.a 00:26:36.044 SO libspdk_bdev_error.so.6.0 00:26:36.044 LIB libspdk_bdev_gpt.a 00:26:36.303 SO libspdk_bdev_gpt.so.6.0 00:26:36.303 CC module/bdev/split/vbdev_split.o 00:26:36.303 SYMLINK libspdk_bdev_error.so 00:26:36.303 CC module/bdev/split/vbdev_split_rpc.o 00:26:36.303 LIB libspdk_bdev_delay.a 00:26:36.303 CC module/bdev/null/bdev_null_rpc.o 00:26:36.303 SO libspdk_bdev_delay.so.6.0 00:26:36.303 SYMLINK libspdk_bdev_gpt.so 00:26:36.303 CC module/bdev/raid/bdev_raid_rpc.o 00:26:36.303 SYMLINK libspdk_bdev_delay.so 00:26:36.303 CC module/bdev/raid/bdev_raid_sb.o 00:26:36.303 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:36.303 CC module/bdev/raid/raid0.o 00:26:36.303 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:36.585 LIB libspdk_bdev_null.a 00:26:36.585 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:36.585 LIB libspdk_bdev_split.a 00:26:36.585 SO libspdk_bdev_null.so.6.0 00:26:36.585 SO libspdk_bdev_split.so.6.0 00:26:36.585 SYMLINK libspdk_bdev_null.so 00:26:36.585 LIB libspdk_bdev_malloc.a 00:26:36.585 CC module/bdev/raid/raid1.o 00:26:36.585 SO libspdk_bdev_malloc.so.6.0 00:26:36.585 LIB libspdk_bdev_passthru.a 00:26:36.585 SYMLINK libspdk_bdev_split.so 00:26:36.890 CC module/bdev/raid/concat.o 00:26:36.890 SO libspdk_bdev_passthru.so.6.0 00:26:36.890 SYMLINK libspdk_bdev_malloc.so 00:26:36.890 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:36.890 CC module/bdev/nvme/nvme_rpc.o 00:26:36.890 SYMLINK libspdk_bdev_passthru.so 00:26:36.890 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:36.890 LIB libspdk_bdev_lvol.a 00:26:36.890 CC module/bdev/xnvme/bdev_xnvme.o 00:26:36.890 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:26:37.157 SO libspdk_bdev_lvol.so.6.0 00:26:37.157 CC module/bdev/aio/bdev_aio.o 00:26:37.157 SYMLINK libspdk_bdev_lvol.so 00:26:37.157 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:37.157 CC module/bdev/nvme/bdev_mdns_client.o 00:26:37.157 CC module/bdev/nvme/vbdev_opal.o 00:26:37.157 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:37.418 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:37.418 LIB libspdk_bdev_zone_block.a 00:26:37.418 LIB libspdk_bdev_xnvme.a 00:26:37.418 SO libspdk_bdev_zone_block.so.6.0 00:26:37.418 SO libspdk_bdev_xnvme.so.3.0 00:26:37.418 CC module/bdev/aio/bdev_aio_rpc.o 00:26:37.418 LIB libspdk_bdev_raid.a 00:26:37.418 SYMLINK libspdk_bdev_zone_block.so 00:26:37.418 SYMLINK libspdk_bdev_xnvme.so 00:26:37.677 CC module/bdev/ftl/bdev_ftl.o 00:26:37.677 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:37.677 SO libspdk_bdev_raid.so.6.0 00:26:37.677 LIB libspdk_bdev_aio.a 00:26:37.677 SYMLINK libspdk_bdev_raid.so 00:26:37.677 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:26:37.677 CC module/bdev/iscsi/bdev_iscsi.o 00:26:37.677 SO libspdk_bdev_aio.so.6.0 00:26:37.677 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:37.677 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:37.677 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:37.935 SYMLINK libspdk_bdev_aio.so 00:26:37.935 LIB libspdk_bdev_ftl.a 00:26:37.935 SO libspdk_bdev_ftl.so.6.0 00:26:38.194 SYMLINK libspdk_bdev_ftl.so 00:26:38.194 LIB libspdk_bdev_iscsi.a 00:26:38.451 SO libspdk_bdev_iscsi.so.6.0 00:26:38.451 LIB libspdk_bdev_virtio.a 00:26:38.451 SYMLINK 
libspdk_bdev_iscsi.so 00:26:38.451 SO libspdk_bdev_virtio.so.6.0 00:26:38.708 SYMLINK libspdk_bdev_virtio.so 00:26:40.083 LIB libspdk_bdev_nvme.a 00:26:40.083 SO libspdk_bdev_nvme.so.7.1 00:26:40.340 SYMLINK libspdk_bdev_nvme.so 00:26:40.906 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:40.906 CC module/event/subsystems/vmd/vmd.o 00:26:40.906 CC module/event/subsystems/fsdev/fsdev.o 00:26:40.906 CC module/event/subsystems/scheduler/scheduler.o 00:26:40.906 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:40.906 CC module/event/subsystems/keyring/keyring.o 00:26:40.906 CC module/event/subsystems/sock/sock.o 00:26:40.906 CC module/event/subsystems/iobuf/iobuf.o 00:26:40.906 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:41.165 LIB libspdk_event_vhost_blk.a 00:26:41.165 LIB libspdk_event_keyring.a 00:26:41.165 SO libspdk_event_vhost_blk.so.3.0 00:26:41.165 LIB libspdk_event_vmd.a 00:26:41.165 LIB libspdk_event_sock.a 00:26:41.165 LIB libspdk_event_fsdev.a 00:26:41.165 SO libspdk_event_keyring.so.1.0 00:26:41.165 LIB libspdk_event_iobuf.a 00:26:41.165 SO libspdk_event_vmd.so.6.0 00:26:41.165 SO libspdk_event_sock.so.5.0 00:26:41.165 LIB libspdk_event_scheduler.a 00:26:41.165 SO libspdk_event_fsdev.so.1.0 00:26:41.165 SYMLINK libspdk_event_vhost_blk.so 00:26:41.165 SYMLINK libspdk_event_keyring.so 00:26:41.165 SO libspdk_event_iobuf.so.3.0 00:26:41.165 SO libspdk_event_scheduler.so.4.0 00:26:41.165 SYMLINK libspdk_event_sock.so 00:26:41.165 SYMLINK libspdk_event_vmd.so 00:26:41.165 SYMLINK libspdk_event_fsdev.so 00:26:41.165 SYMLINK libspdk_event_iobuf.so 00:26:41.165 SYMLINK libspdk_event_scheduler.so 00:26:41.734 CC module/event/subsystems/accel/accel.o 00:26:41.734 LIB libspdk_event_accel.a 00:26:41.734 SO libspdk_event_accel.so.6.0 00:26:41.992 SYMLINK libspdk_event_accel.so 00:26:42.250 CC module/event/subsystems/bdev/bdev.o 00:26:42.509 LIB libspdk_event_bdev.a 00:26:42.509 SO libspdk_event_bdev.so.6.0 00:26:42.509 SYMLINK libspdk_event_bdev.so 00:26:42.767 CC module/event/subsystems/ublk/ublk.o 00:26:42.767 CC module/event/subsystems/scsi/scsi.o 00:26:42.767 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:42.768 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:42.768 CC module/event/subsystems/nbd/nbd.o 00:26:43.026 LIB libspdk_event_ublk.a 00:26:43.026 LIB libspdk_event_nbd.a 00:26:43.026 LIB libspdk_event_scsi.a 00:26:43.026 SO libspdk_event_ublk.so.3.0 00:26:43.026 SO libspdk_event_nbd.so.6.0 00:26:43.026 SO libspdk_event_scsi.so.6.0 00:26:43.026 SYMLINK libspdk_event_nbd.so 00:26:43.026 SYMLINK libspdk_event_scsi.so 00:26:43.026 SYMLINK libspdk_event_ublk.so 00:26:43.284 LIB libspdk_event_nvmf.a 00:26:43.284 SO libspdk_event_nvmf.so.6.0 00:26:43.284 SYMLINK libspdk_event_nvmf.so 00:26:43.284 CC module/event/subsystems/iscsi/iscsi.o 00:26:43.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:43.542 LIB libspdk_event_iscsi.a 00:26:43.542 LIB libspdk_event_vhost_scsi.a 00:26:43.542 SO libspdk_event_iscsi.so.6.0 00:26:43.542 SO libspdk_event_vhost_scsi.so.3.0 00:26:43.800 SYMLINK libspdk_event_iscsi.so 00:26:43.800 SYMLINK libspdk_event_vhost_scsi.so 00:26:43.801 SO libspdk.so.6.0 00:26:43.801 SYMLINK libspdk.so 00:26:44.058 TEST_HEADER include/spdk/accel.h 00:26:44.058 TEST_HEADER include/spdk/accel_module.h 00:26:44.058 TEST_HEADER include/spdk/assert.h 00:26:44.058 TEST_HEADER include/spdk/barrier.h 00:26:44.058 CC test/rpc_client/rpc_client_test.o 00:26:44.058 TEST_HEADER include/spdk/base64.h 00:26:44.058 TEST_HEADER include/spdk/bdev.h 00:26:44.058 TEST_HEADER 
include/spdk/bdev_module.h 00:26:44.058 TEST_HEADER include/spdk/bdev_zone.h 00:26:44.058 TEST_HEADER include/spdk/bit_array.h 00:26:44.316 TEST_HEADER include/spdk/bit_pool.h 00:26:44.316 TEST_HEADER include/spdk/blob_bdev.h 00:26:44.316 TEST_HEADER include/spdk/blobfs_bdev.h 00:26:44.316 CXX app/trace/trace.o 00:26:44.316 TEST_HEADER include/spdk/blobfs.h 00:26:44.316 TEST_HEADER include/spdk/blob.h 00:26:44.316 TEST_HEADER include/spdk/conf.h 00:26:44.316 TEST_HEADER include/spdk/config.h 00:26:44.316 TEST_HEADER include/spdk/cpuset.h 00:26:44.316 TEST_HEADER include/spdk/crc16.h 00:26:44.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:44.316 TEST_HEADER include/spdk/crc32.h 00:26:44.316 TEST_HEADER include/spdk/crc64.h 00:26:44.316 TEST_HEADER include/spdk/dif.h 00:26:44.316 TEST_HEADER include/spdk/dma.h 00:26:44.316 TEST_HEADER include/spdk/endian.h 00:26:44.316 TEST_HEADER include/spdk/env_dpdk.h 00:26:44.316 TEST_HEADER include/spdk/env.h 00:26:44.316 TEST_HEADER include/spdk/event.h 00:26:44.316 TEST_HEADER include/spdk/fd_group.h 00:26:44.316 TEST_HEADER include/spdk/fd.h 00:26:44.316 TEST_HEADER include/spdk/file.h 00:26:44.316 TEST_HEADER include/spdk/fsdev.h 00:26:44.316 TEST_HEADER include/spdk/fsdev_module.h 00:26:44.316 TEST_HEADER include/spdk/ftl.h 00:26:44.316 TEST_HEADER include/spdk/fuse_dispatcher.h 00:26:44.316 TEST_HEADER include/spdk/gpt_spec.h 00:26:44.316 TEST_HEADER include/spdk/hexlify.h 00:26:44.316 TEST_HEADER include/spdk/histogram_data.h 00:26:44.316 TEST_HEADER include/spdk/idxd.h 00:26:44.316 TEST_HEADER include/spdk/idxd_spec.h 00:26:44.316 TEST_HEADER include/spdk/init.h 00:26:44.316 TEST_HEADER include/spdk/ioat.h 00:26:44.316 TEST_HEADER include/spdk/ioat_spec.h 00:26:44.316 TEST_HEADER include/spdk/iscsi_spec.h 00:26:44.316 TEST_HEADER include/spdk/json.h 00:26:44.316 TEST_HEADER include/spdk/jsonrpc.h 00:26:44.316 TEST_HEADER include/spdk/keyring.h 00:26:44.316 TEST_HEADER include/spdk/keyring_module.h 00:26:44.317 TEST_HEADER include/spdk/likely.h 00:26:44.317 TEST_HEADER include/spdk/log.h 00:26:44.317 CC examples/util/zipf/zipf.o 00:26:44.317 TEST_HEADER include/spdk/lvol.h 00:26:44.317 CC examples/ioat/perf/perf.o 00:26:44.317 TEST_HEADER include/spdk/md5.h 00:26:44.317 TEST_HEADER include/spdk/memory.h 00:26:44.317 CC test/thread/poller_perf/poller_perf.o 00:26:44.317 TEST_HEADER include/spdk/mmio.h 00:26:44.317 TEST_HEADER include/spdk/nbd.h 00:26:44.317 TEST_HEADER include/spdk/net.h 00:26:44.317 TEST_HEADER include/spdk/notify.h 00:26:44.317 TEST_HEADER include/spdk/nvme.h 00:26:44.317 CC test/dma/test_dma/test_dma.o 00:26:44.317 TEST_HEADER include/spdk/nvme_intel.h 00:26:44.317 TEST_HEADER include/spdk/nvme_ocssd.h 00:26:44.317 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:26:44.317 TEST_HEADER include/spdk/nvme_spec.h 00:26:44.317 TEST_HEADER include/spdk/nvme_zns.h 00:26:44.317 TEST_HEADER include/spdk/nvmf_cmd.h 00:26:44.317 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:26:44.317 TEST_HEADER include/spdk/nvmf.h 00:26:44.317 TEST_HEADER include/spdk/nvmf_spec.h 00:26:44.317 TEST_HEADER include/spdk/nvmf_transport.h 00:26:44.317 TEST_HEADER include/spdk/opal.h 00:26:44.317 CC test/app/bdev_svc/bdev_svc.o 00:26:44.317 TEST_HEADER include/spdk/opal_spec.h 00:26:44.317 TEST_HEADER include/spdk/pci_ids.h 00:26:44.317 TEST_HEADER include/spdk/pipe.h 00:26:44.317 TEST_HEADER include/spdk/queue.h 00:26:44.317 TEST_HEADER include/spdk/reduce.h 00:26:44.317 TEST_HEADER include/spdk/rpc.h 00:26:44.317 TEST_HEADER include/spdk/scheduler.h 
00:26:44.317 TEST_HEADER include/spdk/scsi.h 00:26:44.575 TEST_HEADER include/spdk/scsi_spec.h 00:26:44.575 TEST_HEADER include/spdk/sock.h 00:26:44.575 TEST_HEADER include/spdk/stdinc.h 00:26:44.575 TEST_HEADER include/spdk/string.h 00:26:44.575 TEST_HEADER include/spdk/thread.h 00:26:44.575 TEST_HEADER include/spdk/trace.h 00:26:44.575 TEST_HEADER include/spdk/trace_parser.h 00:26:44.575 TEST_HEADER include/spdk/tree.h 00:26:44.575 TEST_HEADER include/spdk/ublk.h 00:26:44.575 TEST_HEADER include/spdk/util.h 00:26:44.575 TEST_HEADER include/spdk/uuid.h 00:26:44.575 TEST_HEADER include/spdk/version.h 00:26:44.575 TEST_HEADER include/spdk/vfio_user_pci.h 00:26:44.575 CC test/env/mem_callbacks/mem_callbacks.o 00:26:44.575 TEST_HEADER include/spdk/vfio_user_spec.h 00:26:44.575 TEST_HEADER include/spdk/vhost.h 00:26:44.575 TEST_HEADER include/spdk/vmd.h 00:26:44.575 TEST_HEADER include/spdk/xor.h 00:26:44.575 LINK rpc_client_test 00:26:44.575 TEST_HEADER include/spdk/zipf.h 00:26:44.575 CXX test/cpp_headers/accel.o 00:26:44.575 LINK interrupt_tgt 00:26:44.575 LINK zipf 00:26:44.575 LINK bdev_svc 00:26:44.575 LINK poller_perf 00:26:44.835 LINK spdk_trace 00:26:44.835 LINK ioat_perf 00:26:44.835 CXX test/cpp_headers/accel_module.o 00:26:44.835 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:26:44.835 CC examples/ioat/verify/verify.o 00:26:44.835 CC test/env/vtophys/vtophys.o 00:26:45.093 CXX test/cpp_headers/assert.o 00:26:45.093 CC test/env/memory/memory_ut.o 00:26:45.093 CC test/app/histogram_perf/histogram_perf.o 00:26:45.093 CC app/trace_record/trace_record.o 00:26:45.093 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:45.093 LINK mem_callbacks 00:26:45.093 LINK env_dpdk_post_init 00:26:45.093 LINK test_dma 00:26:45.093 LINK vtophys 00:26:45.093 CXX test/cpp_headers/barrier.o 00:26:45.351 LINK verify 00:26:45.351 LINK histogram_perf 00:26:45.351 CXX test/cpp_headers/base64.o 00:26:45.351 LINK spdk_trace_record 00:26:45.610 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:45.610 CC app/nvmf_tgt/nvmf_main.o 00:26:45.610 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:45.610 CXX test/cpp_headers/bdev.o 00:26:45.610 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:45.610 CC test/env/pci/pci_ut.o 00:26:45.610 CC examples/thread/thread/thread_ex.o 00:26:45.610 CC examples/sock/hello_world/hello_sock.o 00:26:45.868 LINK nvmf_tgt 00:26:45.868 LINK nvme_fuzz 00:26:45.868 CXX test/cpp_headers/bdev_module.o 00:26:45.868 LINK thread 00:26:46.126 CC examples/vmd/lsvmd/lsvmd.o 00:26:46.126 LINK hello_sock 00:26:46.126 CC app/iscsi_tgt/iscsi_tgt.o 00:26:46.126 LINK lsvmd 00:26:46.126 CXX test/cpp_headers/bdev_zone.o 00:26:46.126 LINK vhost_fuzz 00:26:46.126 CC examples/idxd/perf/perf.o 00:26:46.384 LINK pci_ut 00:26:46.384 CC app/spdk_lspci/spdk_lspci.o 00:26:46.384 CC app/spdk_tgt/spdk_tgt.o 00:26:46.384 LINK iscsi_tgt 00:26:46.384 CXX test/cpp_headers/bit_array.o 00:26:46.642 CC app/spdk_nvme_perf/perf.o 00:26:46.642 LINK memory_ut 00:26:46.642 LINK spdk_lspci 00:26:46.642 CC examples/vmd/led/led.o 00:26:46.642 CXX test/cpp_headers/bit_pool.o 00:26:46.642 LINK spdk_tgt 00:26:46.642 LINK idxd_perf 00:26:46.642 CC test/app/jsoncat/jsoncat.o 00:26:46.642 LINK led 00:26:46.899 CXX test/cpp_headers/blob_bdev.o 00:26:46.899 LINK jsoncat 00:26:46.899 CC examples/accel/perf/accel_perf.o 00:26:47.156 CC examples/nvme/hello_world/hello_world.o 00:26:47.156 CC test/app/stub/stub.o 00:26:47.156 CC app/spdk_nvme_identify/identify.o 00:26:47.156 CC examples/blob/hello_world/hello_blob.o 00:26:47.156 CXX 
test/cpp_headers/blobfs_bdev.o 00:26:47.414 CC app/spdk_nvme_discover/discovery_aer.o 00:26:47.414 CC examples/fsdev/hello_world/hello_fsdev.o 00:26:47.414 LINK stub 00:26:47.414 LINK hello_world 00:26:47.414 LINK hello_blob 00:26:47.414 CXX test/cpp_headers/blobfs.o 00:26:47.671 LINK spdk_nvme_discover 00:26:47.671 LINK hello_fsdev 00:26:47.672 CXX test/cpp_headers/blob.o 00:26:47.929 LINK spdk_nvme_perf 00:26:47.929 CC examples/nvme/reconnect/reconnect.o 00:26:47.929 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:47.929 LINK accel_perf 00:26:47.929 CXX test/cpp_headers/conf.o 00:26:47.929 CC examples/blob/cli/blobcli.o 00:26:47.929 CC examples/nvme/arbitration/arbitration.o 00:26:48.186 CC examples/nvme/hotplug/hotplug.o 00:26:48.186 LINK iscsi_fuzz 00:26:48.186 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:48.186 CXX test/cpp_headers/config.o 00:26:48.186 CXX test/cpp_headers/cpuset.o 00:26:48.186 CC examples/nvme/abort/abort.o 00:26:48.445 LINK cmb_copy 00:26:48.445 LINK reconnect 00:26:48.445 LINK hotplug 00:26:48.445 CXX test/cpp_headers/crc16.o 00:26:48.445 LINK arbitration 00:26:48.445 CXX test/cpp_headers/crc32.o 00:26:48.445 LINK nvme_manage 00:26:48.445 CXX test/cpp_headers/crc64.o 00:26:48.703 CXX test/cpp_headers/dif.o 00:26:48.703 CXX test/cpp_headers/dma.o 00:26:48.703 CC examples/bdev/hello_world/hello_bdev.o 00:26:48.703 LINK blobcli 00:26:48.703 LINK spdk_nvme_identify 00:26:48.703 CC examples/bdev/bdevperf/bdevperf.o 00:26:48.703 CXX test/cpp_headers/endian.o 00:26:48.703 LINK abort 00:26:48.703 CXX test/cpp_headers/env_dpdk.o 00:26:48.703 CXX test/cpp_headers/env.o 00:26:49.006 CC test/event/event_perf/event_perf.o 00:26:49.006 CC test/event/reactor/reactor.o 00:26:49.006 CXX test/cpp_headers/event.o 00:26:49.006 LINK hello_bdev 00:26:49.006 CC app/spdk_top/spdk_top.o 00:26:49.006 CC test/event/reactor_perf/reactor_perf.o 00:26:49.006 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:49.006 LINK reactor 00:26:49.006 CC test/event/app_repeat/app_repeat.o 00:26:49.006 LINK event_perf 00:26:49.006 CC test/event/scheduler/scheduler.o 00:26:49.006 CXX test/cpp_headers/fd_group.o 00:26:49.268 CXX test/cpp_headers/fd.o 00:26:49.268 LINK reactor_perf 00:26:49.268 LINK pmr_persistence 00:26:49.268 LINK app_repeat 00:26:49.268 CXX test/cpp_headers/file.o 00:26:49.529 LINK scheduler 00:26:49.529 CC test/nvme/aer/aer.o 00:26:49.529 CC test/nvme/reset/reset.o 00:26:49.529 CXX test/cpp_headers/fsdev.o 00:26:49.529 CXX test/cpp_headers/fsdev_module.o 00:26:49.529 CC test/accel/dif/dif.o 00:26:49.788 CC app/vhost/vhost.o 00:26:49.788 CC test/blobfs/mkfs/mkfs.o 00:26:49.788 CXX test/cpp_headers/ftl.o 00:26:49.788 LINK bdevperf 00:26:49.788 LINK aer 00:26:49.788 CC test/nvme/sgl/sgl.o 00:26:49.788 LINK reset 00:26:50.046 LINK vhost 00:26:50.046 LINK mkfs 00:26:50.046 CXX test/cpp_headers/fuse_dispatcher.o 00:26:50.046 LINK spdk_top 00:26:50.046 CC test/lvol/esnap/esnap.o 00:26:50.305 LINK sgl 00:26:50.305 CXX test/cpp_headers/gpt_spec.o 00:26:50.305 CXX test/cpp_headers/hexlify.o 00:26:50.305 CC test/nvme/e2edp/nvme_dp.o 00:26:50.305 CC app/spdk_dd/spdk_dd.o 00:26:50.305 CC examples/nvmf/nvmf/nvmf.o 00:26:50.305 CXX test/cpp_headers/histogram_data.o 00:26:50.564 CC app/fio/nvme/fio_plugin.o 00:26:50.564 CC test/nvme/overhead/overhead.o 00:26:50.564 CXX test/cpp_headers/idxd.o 00:26:50.564 CC app/fio/bdev/fio_plugin.o 00:26:50.564 CC test/nvme/err_injection/err_injection.o 00:26:50.564 LINK nvme_dp 00:26:50.564 LINK dif 00:26:50.564 LINK nvmf 00:26:50.823 LINK spdk_dd 00:26:50.823 
CXX test/cpp_headers/idxd_spec.o 00:26:50.823 LINK err_injection 00:26:50.823 LINK overhead 00:26:50.823 CC test/nvme/startup/startup.o 00:26:50.823 CXX test/cpp_headers/init.o 00:26:50.823 CC test/nvme/reserve/reserve.o 00:26:51.082 CC test/nvme/simple_copy/simple_copy.o 00:26:51.082 CC test/nvme/connect_stress/connect_stress.o 00:26:51.082 LINK startup 00:26:51.082 CXX test/cpp_headers/ioat.o 00:26:51.082 LINK spdk_bdev 00:26:51.082 CC test/nvme/boot_partition/boot_partition.o 00:26:51.082 CC test/bdev/bdevio/bdevio.o 00:26:51.082 LINK spdk_nvme 00:26:51.082 LINK reserve 00:26:51.341 LINK connect_stress 00:26:51.341 CXX test/cpp_headers/ioat_spec.o 00:26:51.341 LINK boot_partition 00:26:51.341 CXX test/cpp_headers/iscsi_spec.o 00:26:51.341 LINK simple_copy 00:26:51.341 CC test/nvme/compliance/nvme_compliance.o 00:26:51.341 CC test/nvme/fused_ordering/fused_ordering.o 00:26:51.601 CC test/nvme/doorbell_aers/doorbell_aers.o 00:26:51.601 CXX test/cpp_headers/json.o 00:26:51.601 CXX test/cpp_headers/jsonrpc.o 00:26:51.601 CC test/nvme/fdp/fdp.o 00:26:51.601 CC test/nvme/cuse/cuse.o 00:26:51.601 CXX test/cpp_headers/keyring.o 00:26:51.601 LINK bdevio 00:26:51.601 LINK fused_ordering 00:26:51.859 LINK doorbell_aers 00:26:51.859 CXX test/cpp_headers/keyring_module.o 00:26:51.859 CXX test/cpp_headers/likely.o 00:26:51.859 LINK nvme_compliance 00:26:51.859 CXX test/cpp_headers/log.o 00:26:51.859 CXX test/cpp_headers/lvol.o 00:26:51.859 CXX test/cpp_headers/md5.o 00:26:51.859 CXX test/cpp_headers/memory.o 00:26:51.859 CXX test/cpp_headers/mmio.o 00:26:51.859 CXX test/cpp_headers/nbd.o 00:26:51.859 LINK fdp 00:26:52.118 CXX test/cpp_headers/net.o 00:26:52.118 CXX test/cpp_headers/notify.o 00:26:52.118 CXX test/cpp_headers/nvme.o 00:26:52.118 CXX test/cpp_headers/nvme_intel.o 00:26:52.118 CXX test/cpp_headers/nvme_ocssd.o 00:26:52.118 CXX test/cpp_headers/nvme_ocssd_spec.o 00:26:52.118 CXX test/cpp_headers/nvme_spec.o 00:26:52.118 CXX test/cpp_headers/nvme_zns.o 00:26:52.118 CXX test/cpp_headers/nvmf_cmd.o 00:26:52.118 CXX test/cpp_headers/nvmf_fc_spec.o 00:26:52.377 CXX test/cpp_headers/nvmf.o 00:26:52.377 CXX test/cpp_headers/nvmf_spec.o 00:26:52.377 CXX test/cpp_headers/nvmf_transport.o 00:26:52.377 CXX test/cpp_headers/opal.o 00:26:52.377 CXX test/cpp_headers/opal_spec.o 00:26:52.377 CXX test/cpp_headers/pci_ids.o 00:26:52.377 CXX test/cpp_headers/pipe.o 00:26:52.377 CXX test/cpp_headers/queue.o 00:26:52.377 CXX test/cpp_headers/reduce.o 00:26:52.377 CXX test/cpp_headers/rpc.o 00:26:52.377 CXX test/cpp_headers/scheduler.o 00:26:52.635 CXX test/cpp_headers/scsi.o 00:26:52.635 CXX test/cpp_headers/scsi_spec.o 00:26:52.635 CXX test/cpp_headers/sock.o 00:26:52.635 CXX test/cpp_headers/stdinc.o 00:26:52.635 CXX test/cpp_headers/string.o 00:26:52.635 CXX test/cpp_headers/thread.o 00:26:52.635 CXX test/cpp_headers/trace.o 00:26:52.635 CXX test/cpp_headers/trace_parser.o 00:26:52.893 CXX test/cpp_headers/tree.o 00:26:52.893 CXX test/cpp_headers/ublk.o 00:26:52.893 CXX test/cpp_headers/util.o 00:26:52.893 CXX test/cpp_headers/uuid.o 00:26:52.893 CXX test/cpp_headers/version.o 00:26:52.893 CXX test/cpp_headers/vfio_user_pci.o 00:26:52.893 CXX test/cpp_headers/vfio_user_spec.o 00:26:52.893 CXX test/cpp_headers/vhost.o 00:26:52.893 CXX test/cpp_headers/xor.o 00:26:52.893 CXX test/cpp_headers/vmd.o 00:26:52.893 CXX test/cpp_headers/zipf.o 00:26:53.192 LINK cuse 00:26:57.385 LINK esnap 00:26:57.670 00:26:57.670 real 1m41.254s 00:26:57.670 user 9m5.868s 00:26:57.670 sys 2m8.648s 00:26:57.670 13:47:54 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:26:57.670 13:47:54 make -- common/autotest_common.sh@10 -- $ set +x 00:26:57.670 ************************************ 00:26:57.670 END TEST make 00:26:57.670 ************************************ 00:26:57.930 13:47:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:26:57.930 13:47:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:57.930 13:47:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:57.930 13:47:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:57.930 13:47:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:57.930 13:47:55 -- pm/common@44 -- $ pid=5334 00:26:57.930 13:47:55 -- pm/common@50 -- $ kill -TERM 5334 00:26:57.930 13:47:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:57.930 13:47:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:57.930 13:47:55 -- pm/common@44 -- $ pid=5335 00:26:57.930 13:47:55 -- pm/common@50 -- $ kill -TERM 5335 00:26:57.930 13:47:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:26:57.930 13:47:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:57.930 13:47:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.930 13:47:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.930 13:47:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.930 13:47:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.930 13:47:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.930 13:47:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.930 13:47:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.930 13:47:55 -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.930 13:47:55 -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.930 13:47:55 -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.930 13:47:55 -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.930 13:47:55 -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.930 13:47:55 -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.930 13:47:55 -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.930 13:47:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.930 13:47:55 -- scripts/common.sh@344 -- # case "$op" in 00:26:57.930 13:47:55 -- scripts/common.sh@345 -- # : 1 00:26:57.930 13:47:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.930 13:47:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.930 13:47:55 -- scripts/common.sh@365 -- # decimal 1 00:26:57.930 13:47:55 -- scripts/common.sh@353 -- # local d=1 00:26:57.930 13:47:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.930 13:47:55 -- scripts/common.sh@355 -- # echo 1 00:26:57.930 13:47:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.930 13:47:55 -- scripts/common.sh@366 -- # decimal 2 00:26:57.930 13:47:55 -- scripts/common.sh@353 -- # local d=2 00:26:57.930 13:47:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.930 13:47:55 -- scripts/common.sh@355 -- # echo 2 00:26:57.930 13:47:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.930 13:47:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.930 13:47:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.930 13:47:55 -- scripts/common.sh@368 -- # return 0 00:26:57.930 13:47:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.930 13:47:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.930 --rc genhtml_branch_coverage=1 00:26:57.930 --rc genhtml_function_coverage=1 00:26:57.930 --rc genhtml_legend=1 00:26:57.930 --rc geninfo_all_blocks=1 00:26:57.930 --rc geninfo_unexecuted_blocks=1 00:26:57.930 00:26:57.930 ' 00:26:57.930 13:47:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.930 --rc genhtml_branch_coverage=1 00:26:57.930 --rc genhtml_function_coverage=1 00:26:57.930 --rc genhtml_legend=1 00:26:57.930 --rc geninfo_all_blocks=1 00:26:57.930 --rc geninfo_unexecuted_blocks=1 00:26:57.930 00:26:57.930 ' 00:26:57.930 13:47:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.930 --rc genhtml_branch_coverage=1 00:26:57.930 --rc genhtml_function_coverage=1 00:26:57.930 --rc genhtml_legend=1 00:26:57.930 --rc geninfo_all_blocks=1 00:26:57.930 --rc geninfo_unexecuted_blocks=1 00:26:57.930 00:26:57.930 ' 00:26:57.930 13:47:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.930 --rc genhtml_branch_coverage=1 00:26:57.930 --rc genhtml_function_coverage=1 00:26:57.930 --rc genhtml_legend=1 00:26:57.931 --rc geninfo_all_blocks=1 00:26:57.931 --rc geninfo_unexecuted_blocks=1 00:26:57.931 00:26:57.931 ' 00:26:57.931 13:47:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:57.931 13:47:55 -- nvmf/common.sh@7 -- # uname -s 00:26:57.931 13:47:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.931 13:47:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.931 13:47:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.931 13:47:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.931 13:47:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.931 13:47:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.931 13:47:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.931 13:47:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.931 13:47:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.931 13:47:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.931 13:47:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97392525-1e58-4e74-9818-6fb5e8322d2f 00:26:57.931 
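The cmp_versions walk near the top of this block is scripts/common.sh testing whether the installed lcov (1.15 in this run) is older than 2, so autotest can pick --rc option spellings the tool understands; newer lcov releases renamed these options, hence the version gate. A condensed sketch of that dotted-version comparison, with illustrative names rather than the exact helpers in scripts/common.sh:

    # Sketch of a "version less-than" test in the style of the cmp_versions
    # trace above; function and variable names here are illustrative.
    version_lt() {
        local IFS=.-:                 # split fields on dots, dashes, colons
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* names"
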
13:47:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=97392525-1e58-4e74-9818-6fb5e8322d2f 00:26:57.931 13:47:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.931 13:47:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.931 13:47:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:57.931 13:47:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.931 13:47:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:57.931 13:47:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.931 13:47:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.931 13:47:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.931 13:47:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.931 13:47:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.931 13:47:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.931 13:47:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.931 13:47:55 -- paths/export.sh@5 -- # export PATH 00:26:57.931 13:47:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.931 13:47:55 -- nvmf/common.sh@51 -- # : 0 00:26:57.931 13:47:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.931 13:47:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.931 13:47:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.931 13:47:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.931 13:47:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.931 13:47:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.931 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.931 13:47:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.931 13:47:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.931 13:47:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.931 13:47:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:26:58.191 13:47:55 -- spdk/autotest.sh@32 -- # uname -s 00:26:58.191 13:47:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:26:58.191 13:47:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:26:58.191 13:47:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:58.191 13:47:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:26:58.191 13:47:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:58.191 13:47:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:26:58.191 13:47:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:26:58.191 13:47:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:26:58.191 13:47:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54943 00:26:58.191 13:47:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:26:58.191 13:47:55 -- pm/common@17 -- # local monitor 00:26:58.191 13:47:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:58.191 13:47:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:58.191 13:47:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:26:58.191 13:47:55 -- pm/common@25 -- # sleep 1 00:26:58.191 13:47:55 -- pm/common@21 -- # date +%s 00:26:58.191 13:47:55 -- pm/common@21 -- # date +%s 00:26:58.191 13:47:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732110475 00:26:58.191 13:47:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732110475 00:26:58.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732110475_collect-cpu-load.pm.log 00:26:58.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732110475_collect-vmstat.pm.log 00:26:59.128 13:47:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:26:59.128 13:47:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:26:59.128 13:47:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.128 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:26:59.128 13:47:56 -- spdk/autotest.sh@59 -- # create_test_list 00:26:59.128 13:47:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:26:59.128 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:26:59.128 13:47:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:26:59.128 13:47:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:26:59.128 13:47:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:26:59.128 13:47:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:26:59.128 13:47:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:26:59.128 13:47:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:26:59.128 13:47:56 -- common/autotest_common.sh@1457 -- # uname 00:26:59.128 13:47:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:26:59.128 13:47:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:26:59.128 13:47:56 -- common/autotest_common.sh@1477 -- # uname 00:26:59.128 13:47:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:26:59.128 13:47:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:26:59.128 13:47:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:26:59.387 lcov: LCOV version 1.15 00:26:59.387 13:47:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:27:21.332 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:27:21.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:27:39.594 13:48:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:27:39.595 13:48:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.595 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:27:39.595 13:48:34 -- spdk/autotest.sh@78 -- # rm -f 00:27:39.595 13:48:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:39.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:39.595 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:39.595 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:39.595 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:39.595 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:39.595 13:48:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:27:39.595 13:48:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:27:39.595 13:48:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:27:39.595 13:48:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:27:39.595 13:48:35 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:39.595 13:48:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:27:39.595 13:48:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:27:39.595 13:48:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:39.595 13:48:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:27:39.595 13:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:27:39.595 13:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:27:39.595 13:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:35 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:27:39.595 1+0 records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119223 s, 88.0 MB/s 00:27:39.595 13:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:27:39.595 13:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:27:39.595 13:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:35 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:27:39.595 1+0 records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470673 s, 223 MB/s 00:27:39.595 13:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:27:39.595 13:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:27:39.595 13:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:27:39.595 13:48:35 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:35 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:27:39.595 1+0 
records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468784 s, 224 MB/s 00:27:39.595 13:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:27:39.595 13:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:27:39.595 13:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:36 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:27:39.595 1+0 records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560356 s, 187 MB/s 00:27:39.595 13:48:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:27:39.595 13:48:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:27:39.595 13:48:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:36 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:27:39.595 1+0 records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534713 s, 196 MB/s 00:27:39.595 13:48:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:39.595 13:48:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:39.595 13:48:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:27:39.595 13:48:36 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:27:39.595 13:48:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:27:39.595 No valid GPT data, bailing 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:27:39.595 13:48:36 -- scripts/common.sh@394 -- # pt= 00:27:39.595 13:48:36 -- scripts/common.sh@395 -- # return 1 00:27:39.595 13:48:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:27:39.595 1+0 records in 00:27:39.595 1+0 records out 00:27:39.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569321 s, 184 MB/s 00:27:39.595 13:48:36 -- spdk/autotest.sh@105 -- # sync 00:27:39.595 13:48:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:27:39.595 13:48:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:27:39.595 13:48:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:27:41.499 13:48:38 -- spdk/autotest.sh@111 -- # uname -s 00:27:41.499 13:48:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:27:41.499 13:48:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:27:41.499 13:48:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:42.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:42.645 
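The block_in_use loop above is autotest's pre-test disk sanitization: scripts/spdk-gpt.py probes each NVMe namespace for a GPT signature, and wherever it bails with "No valid GPT data" the first MiB of the device is zeroed so stale partition metadata cannot leak into the tests. The setup.sh status call just above then prints the hugepage and device inventory that follows. A reduced sketch of the probe-and-wipe pattern, condensing the spdk-gpt.py plus blkid probe visible in the trace to just blkid, and guarding the dd with echo so the sketch is safe to run:

    # Sketch: check each whole NVMe namespace for a partition-table
    # signature and show the wipe autotest performs when none is found.
    # blkid stands in for scripts/spdk-gpt.py; echo guards the real dd.
    shopt -s extglob nullglob
    for dev in /dev/nvme*n!(*p*); do      # namespaces only, skip partitions
        pt=$(blkid -s PTTYPE -o value "$dev")
        if [[ -z $pt ]]; then
            echo dd if=/dev/zero of="$dev" bs=1M count=1
        else
            echo "$dev has a $pt partition table; leaving it alone"
        fi
    done
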
Hugepages 00:27:42.645 node hugesize free / total 00:27:42.645 node0 1048576kB 0 / 0 00:27:42.645 node0 2048kB 0 / 0 00:27:42.645 00:27:42.645 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:42.645 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:42.645 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:42.645 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:27:42.909 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:27:42.909 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:27:42.909 13:48:40 -- spdk/autotest.sh@117 -- # uname -s 00:27:42.909 13:48:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:27:42.909 13:48:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:27:42.909 13:48:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:43.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:44.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:44.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:44.409 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:44.409 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:44.409 13:48:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:27:45.359 13:48:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:27:45.359 13:48:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:27:45.359 13:48:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:27:45.359 13:48:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:27:45.359 13:48:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:45.359 13:48:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:45.359 13:48:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:45.359 13:48:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:45.359 13:48:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:45.617 13:48:42 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:45.617 13:48:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:45.617 13:48:42 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:45.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:46.134 Waiting for block devices as requested 00:27:46.134 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:46.134 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:46.393 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:46.393 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.655 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:51.655 13:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:51.655 13:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:27:51.655 13:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:27:51.655 13:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:51.655 13:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:51.655 13:48:48 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:27:51.656 13:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:51.656 13:48:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1543 -- # continue 00:27:51.656 13:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:51.656 13:48:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1543 -- # continue 00:27:51.656 13:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:51.656 13:48:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1543 -- # continue 00:27:51.656 13:48:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:27:51.656 13:48:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:27:51.656 13:48:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:27:51.656 13:48:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:27:51.656 13:48:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
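Each get_nvme_ctrlr_from_bdf pass in the trace above resolves a PCI address to its NVMe character device by following the /sys/class/nvme symlinks, then uses nvme id-ctrl to confirm the controller advertises namespace management (OACS 0x12a has bit 3 set, hence oacs_ns_manage=8) and reports no unallocated capacity (unvmcap 0). A condensed sketch of the same lookup; the BDF is one of this run's QEMU controllers, nvme-cli must be installed, and root is needed so id-ctrl can reach the device:

    # Sketch of the BDF -> /dev/nvmeX resolution traced above.
    bdf=0000:00:10.0                       # illustrative address from this run
    for link in /sys/class/nvme/nvme*; do
        path=$(readlink -f "$link")        # e.g. .../0000:00:10.0/nvme/nvme1
        if [[ $path == */$bdf/nvme/* ]]; then
            ctrlr=/dev/$(basename "$path")
            oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
            echo "$bdf -> $ctrlr (oacs:$oacs)"
        fi
    done
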
00:27:51.656 13:48:48 -- common/autotest_common.sh@1543 -- # continue 00:27:51.656 13:48:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:27:51.656 13:48:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.656 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:27:51.656 13:48:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:27:51.656 13:48:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.656 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:27:51.656 13:48:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:52.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.170 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.170 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.170 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.170 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.170 13:48:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:27:53.170 13:48:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.170 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:27:53.170 13:48:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:27:53.170 13:48:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:27:53.170 13:48:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:27:53.170 13:48:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:27:53.170 13:48:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:27:53.170 13:48:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:27:53.170 13:48:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:27:53.170 13:48:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:27:53.170 13:48:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:53.170 13:48:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:53.170 13:48:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:53.170 13:48:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:53.170 13:48:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:53.429 13:48:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:53.429 13:48:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:53.429 13:48:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:53.429 13:48:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:53.429 13:48:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:53.429 13:48:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:53.429 13:48:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:53.429 13:48:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
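opal_revert_cleanup only has work to do on self-encrypting drives, so get_nvme_bdfs_by_id filters the controller list by PCI device ID 0x0a54 (an Intel datacenter NVMe part used in other CI pools); every controller in this VM reports the QEMU ID 0x0010, the filtered list stays empty, and the revert is skipped. A minimal sketch of that filter over this run's addresses:

    # Sketch: keep only controllers whose PCI device ID matches, mirroring
    # the get_nvme_bdfs_by_id 0x0a54 trace above.
    want=0x0a54
    matches=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on this VM
        [[ $dev == "$want" ]] && matches+=("$bdf")
    done
    echo "controllers needing an Opal revert: ${matches[*]:-(none)}"
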
00:27:53.429 13:48:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:27:53.429 13:48:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:27:53.429 13:48:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:53.430 13:48:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:27:53.430 13:48:50 -- common/autotest_common.sh@1572 -- # return 0 00:27:53.430 13:48:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:27:53.430 13:48:50 -- common/autotest_common.sh@1580 -- # return 0 00:27:53.430 13:48:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:27:53.430 13:48:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:27:53.430 13:48:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:53.430 13:48:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:53.430 13:48:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:27:53.430 13:48:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.430 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:27:53.430 13:48:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:27:53.430 13:48:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:53.430 13:48:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.430 13:48:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.430 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:27:53.430 ************************************ 00:27:53.430 START TEST env 00:27:53.430 ************************************ 00:27:53.430 13:48:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:53.430 * Looking for test storage... 00:27:53.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:27:53.430 13:48:50 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:53.430 13:48:50 env -- common/autotest_common.sh@1693 -- # lcov --version 00:27:53.430 13:48:50 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:53.689 13:48:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:53.689 13:48:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.689 13:48:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.689 13:48:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.689 13:48:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.689 13:48:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.689 13:48:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.689 13:48:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.689 13:48:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.689 13:48:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.689 13:48:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.689 13:48:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.689 13:48:50 env -- scripts/common.sh@344 -- # case "$op" in 00:27:53.689 13:48:50 env -- scripts/common.sh@345 -- # : 1 00:27:53.690 13:48:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.690 13:48:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.690 13:48:50 env -- scripts/common.sh@365 -- # decimal 1 00:27:53.690 13:48:50 env -- scripts/common.sh@353 -- # local d=1 00:27:53.690 13:48:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.690 13:48:50 env -- scripts/common.sh@355 -- # echo 1 00:27:53.690 13:48:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.690 13:48:50 env -- scripts/common.sh@366 -- # decimal 2 00:27:53.690 13:48:50 env -- scripts/common.sh@353 -- # local d=2 00:27:53.690 13:48:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.690 13:48:50 env -- scripts/common.sh@355 -- # echo 2 00:27:53.690 13:48:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.690 13:48:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.690 13:48:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.690 13:48:50 env -- scripts/common.sh@368 -- # return 0 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:53.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.690 --rc genhtml_branch_coverage=1 00:27:53.690 --rc genhtml_function_coverage=1 00:27:53.690 --rc genhtml_legend=1 00:27:53.690 --rc geninfo_all_blocks=1 00:27:53.690 --rc geninfo_unexecuted_blocks=1 00:27:53.690 00:27:53.690 ' 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:53.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.690 --rc genhtml_branch_coverage=1 00:27:53.690 --rc genhtml_function_coverage=1 00:27:53.690 --rc genhtml_legend=1 00:27:53.690 --rc geninfo_all_blocks=1 00:27:53.690 --rc geninfo_unexecuted_blocks=1 00:27:53.690 00:27:53.690 ' 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:53.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.690 --rc genhtml_branch_coverage=1 00:27:53.690 --rc genhtml_function_coverage=1 00:27:53.690 --rc genhtml_legend=1 00:27:53.690 --rc geninfo_all_blocks=1 00:27:53.690 --rc geninfo_unexecuted_blocks=1 00:27:53.690 00:27:53.690 ' 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:53.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.690 --rc genhtml_branch_coverage=1 00:27:53.690 --rc genhtml_function_coverage=1 00:27:53.690 --rc genhtml_legend=1 00:27:53.690 --rc geninfo_all_blocks=1 00:27:53.690 --rc geninfo_unexecuted_blocks=1 00:27:53.690 00:27:53.690 ' 00:27:53.690 13:48:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.690 13:48:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.690 13:48:50 env -- common/autotest_common.sh@10 -- # set +x 00:27:53.690 ************************************ 00:27:53.690 START TEST env_memory 00:27:53.690 ************************************ 00:27:53.690 13:48:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:53.690 00:27:53.690 00:27:53.690 CUnit - A unit testing framework for C - Version 2.1-3 00:27:53.690 http://cunit.sourceforge.net/ 00:27:53.690 00:27:53.690 00:27:53.690 Suite: memory 00:27:53.690 Test: alloc and free memory map ...[2024-11-20 13:48:50.877612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:27:53.690 passed 00:27:53.690 Test: mem map translation ...[2024-11-20 13:48:50.950416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:27:53.690 [2024-11-20 13:48:50.950528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:27:53.690 [2024-11-20 13:48:50.950633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:27:53.690 [2024-11-20 13:48:50.950692] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:27:53.950 passed 00:27:53.950 Test: mem map registration ...[2024-11-20 13:48:51.063791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:27:53.950 [2024-11-20 13:48:51.063919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:27:53.950 passed 00:27:53.950 Test: mem map adjacent registrations ...passed 00:27:53.950 00:27:53.950 Run Summary: Type Total Ran Passed Failed Inactive 00:27:53.950 suites 1 1 n/a 0 0 00:27:53.950 tests 4 4 4 0 0 00:27:53.950 asserts 152 152 152 0 n/a 00:27:53.950 00:27:53.950 Elapsed time = 0.382 seconds 00:27:53.950 00:27:53.950 real 0m0.432s 00:27:53.950 user 0m0.390s 00:27:53.950 sys 0m0.034s 00:27:53.950 13:48:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.950 13:48:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:27:53.950 ************************************ 00:27:53.950 END TEST env_memory 00:27:53.950 ************************************ 00:27:53.950 13:48:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:53.950 13:48:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.950 13:48:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.950 13:48:51 env -- common/autotest_common.sh@10 -- # set +x 00:27:54.210 ************************************ 00:27:54.210 START TEST env_vtophys 00:27:54.210 ************************************ 00:27:54.210 13:48:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:54.210 EAL: lib.eal log level changed from notice to debug 00:27:54.210 EAL: Detected lcore 0 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 1 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 2 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 3 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 4 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 5 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 6 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 7 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 8 as core 0 on socket 0 00:27:54.210 EAL: Detected lcore 9 as core 0 on socket 0 00:27:54.210 EAL: Maximum logical cores by configuration: 128 00:27:54.210 EAL: Detected CPU lcores: 10 00:27:54.210 EAL: Detected NUMA nodes: 1 00:27:54.210 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:27:54.210 EAL: Detected shared linkage of DPDK 00:27:54.210 EAL: No 
shared files mode enabled, IPC will be disabled 00:27:54.210 EAL: Selected IOVA mode 'PA' 00:27:54.210 EAL: Probing VFIO support... 00:27:54.210 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:54.210 EAL: VFIO modules not loaded, skipping VFIO support... 00:27:54.210 EAL: Ask a virtual area of 0x2e000 bytes 00:27:54.210 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:27:54.210 EAL: Setting up physically contiguous memory... 00:27:54.210 EAL: Setting maximum number of open files to 524288 00:27:54.210 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:27:54.210 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:27:54.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:54.210 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:27:54.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:54.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:54.210 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:27:54.210 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:27:54.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:54.210 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:27:54.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:54.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:54.210 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:27:54.210 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:27:54.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:54.210 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:27:54.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:54.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:54.210 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:27:54.210 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:27:54.210 EAL: Ask a virtual area of 0x61000 bytes 00:27:54.210 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:27:54.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:54.210 EAL: Ask a virtual area of 0x400000000 bytes 00:27:54.210 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:27:54.210 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:27:54.210 EAL: Hugepages will be freed exactly as allocated. 00:27:54.210 EAL: No shared files mode enabled, IPC is disabled 00:27:54.210 EAL: No shared files mode enabled, IPC is disabled 00:27:54.210 EAL: TSC frequency is ~2100000 KHz 00:27:54.210 EAL: Main lcore 0 is ready (tid=7fe67eacfa40;cpuset=[0]) 00:27:54.210 EAL: Trying to obtain current memory policy. 00:27:54.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:54.210 EAL: Restoring previous memory policy: 0 00:27:54.210 EAL: request: mp_malloc_sync 00:27:54.210 EAL: No shared files mode enabled, IPC is disabled 00:27:54.210 EAL: Heap on socket 0 was expanded by 2MB 00:27:54.210 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:54.210 EAL: No PCI address specified using 'addr=' in: bus=pci 00:27:54.210 EAL: Mem event callback 'spdk:(nil)' registered 00:27:54.210 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:27:54.468 00:27:54.468 00:27:54.468 CUnit - A unit testing framework for C - Version 2.1-3 00:27:54.468 http://cunit.sourceforge.net/ 00:27:54.468 00:27:54.468 00:27:54.468 Suite: components_suite 00:27:55.036 Test: vtophys_malloc_test ...passed 00:27:55.036 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:27:55.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.036 EAL: Restoring previous memory policy: 4 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was expanded by 4MB 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was shrunk by 4MB 00:27:55.036 EAL: Trying to obtain current memory policy. 00:27:55.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.036 EAL: Restoring previous memory policy: 4 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was expanded by 6MB 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was shrunk by 6MB 00:27:55.036 EAL: Trying to obtain current memory policy. 00:27:55.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.036 EAL: Restoring previous memory policy: 4 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was expanded by 10MB 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was shrunk by 10MB 00:27:55.036 EAL: Trying to obtain current memory policy. 00:27:55.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.036 EAL: Restoring previous memory policy: 4 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was expanded by 18MB 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was shrunk by 18MB 00:27:55.036 EAL: Trying to obtain current memory policy. 00:27:55.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.036 EAL: Restoring previous memory policy: 4 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was expanded by 34MB 00:27:55.036 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.036 EAL: request: mp_malloc_sync 00:27:55.036 EAL: No shared files mode enabled, IPC is disabled 00:27:55.036 EAL: Heap on socket 0 was shrunk by 34MB 00:27:55.294 EAL: Trying to obtain current memory policy. 
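[Note] For context on the env_memory *ERROR* lines at the top of this excerpt: lib/env_dpdk mem maps track translations at 2 MiB granularity, so spdk_mem_map_set_translation() rejects any vaddr or len that is not a 2 MiB multiple, any address above the usermode range, and any region it cannot find a map page for. A minimal sketch of the API those checks guard — not the test's source; the app name and translation values are made up, and the signatures are the ones in spdk/env.h:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    #define CHUNK_2MB (2ULL * 1024 * 1024)

    /* Every spdk_mem_map is told about registered memory through this hook. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        return 0; /* accept all regions */
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    int
    main(int argc, char **argv)
    {
        struct spdk_env_opts opts;
        struct spdk_mem_map *map;
        uint64_t len = CHUNK_2MB;

        spdk_env_opts_init(&opts);
        opts.name = "mem_map_demo"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* 0 is the translation returned for addresses never set. */
        map = spdk_mem_map_alloc(0, &ops, NULL);

        /* vaddr and len must both be 2 MiB multiples; the env_memory
         * *ERROR* lines above are exactly these checks firing. */
        spdk_mem_map_set_translation(map, 2 * CHUNK_2MB, CHUNK_2MB, 0x1000); /* ok */
        spdk_mem_map_set_translation(map, 2 * CHUNK_2MB, 1234, 0x1000);     /* rejected: len */
        spdk_mem_map_set_translation(map, 1234, CHUNK_2MB, 0x1000);         /* rejected: vaddr */

        printf("translation: 0x%" PRIx64 "\n",
               spdk_mem_map_translate(map, 2 * CHUNK_2MB, &len));

        spdk_mem_map_free(&map);
        return 0;
    }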
00:27:55.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.294 EAL: Restoring previous memory policy: 4 00:27:55.294 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.294 EAL: request: mp_malloc_sync 00:27:55.294 EAL: No shared files mode enabled, IPC is disabled 00:27:55.294 EAL: Heap on socket 0 was expanded by 66MB 00:27:55.294 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.294 EAL: request: mp_malloc_sync 00:27:55.294 EAL: No shared files mode enabled, IPC is disabled 00:27:55.294 EAL: Heap on socket 0 was shrunk by 66MB 00:27:55.553 EAL: Trying to obtain current memory policy. 00:27:55.553 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:55.553 EAL: Restoring previous memory policy: 4 00:27:55.553 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.553 EAL: request: mp_malloc_sync 00:27:55.553 EAL: No shared files mode enabled, IPC is disabled 00:27:55.553 EAL: Heap on socket 0 was expanded by 130MB 00:27:55.812 EAL: Calling mem event callback 'spdk:(nil)' 00:27:55.812 EAL: request: mp_malloc_sync 00:27:55.812 EAL: No shared files mode enabled, IPC is disabled 00:27:55.812 EAL: Heap on socket 0 was shrunk by 130MB 00:27:56.070 EAL: Trying to obtain current memory policy. 00:27:56.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:56.070 EAL: Restoring previous memory policy: 4 00:27:56.070 EAL: Calling mem event callback 'spdk:(nil)' 00:27:56.070 EAL: request: mp_malloc_sync 00:27:56.070 EAL: No shared files mode enabled, IPC is disabled 00:27:56.070 EAL: Heap on socket 0 was expanded by 258MB 00:27:56.636 EAL: Calling mem event callback 'spdk:(nil)' 00:27:56.636 EAL: request: mp_malloc_sync 00:27:56.636 EAL: No shared files mode enabled, IPC is disabled 00:27:56.636 EAL: Heap on socket 0 was shrunk by 258MB 00:27:57.206 EAL: Trying to obtain current memory policy. 00:27:57.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:57.206 EAL: Restoring previous memory policy: 4 00:27:57.206 EAL: Calling mem event callback 'spdk:(nil)' 00:27:57.206 EAL: request: mp_malloc_sync 00:27:57.206 EAL: No shared files mode enabled, IPC is disabled 00:27:57.207 EAL: Heap on socket 0 was expanded by 514MB 00:27:58.140 EAL: Calling mem event callback 'spdk:(nil)' 00:27:58.398 EAL: request: mp_malloc_sync 00:27:58.398 EAL: No shared files mode enabled, IPC is disabled 00:27:58.398 EAL: Heap on socket 0 was shrunk by 514MB 00:27:59.333 EAL: Trying to obtain current memory policy. 
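[Note] The expand/shrink pairs running through this suite are vtophys_spdk_malloc_test allocating progressively larger DPDK buffers: in dynamic memory mode, each allocation that outgrows the heap fires the registered 'spdk:' mem event callback so SPDK can register the new hugepages, and each free returns them ("Hugepages will be freed exactly as allocated"). Roughly what one iteration boils down to — a sketch, not the test's source; buffer size, alignment, and app name are illustrative:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(int argc, char **argv)
    {
        struct spdk_env_opts opts;
        uint64_t len = 4ULL * 1024 * 1024;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_demo"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Growing past the current heap triggers the 'spdk:(nil)' mem
         * event callback seen above before the allocation returns. */
        void *buf = spdk_dma_zmalloc(len, 0x200000, NULL);
        if (buf == NULL) {
            return 1;
        }

        /* vtophys resolves a virtual address to a physical/IOVA address. */
        uint64_t paddr = spdk_vtophys(buf, &len);
        printf("%p -> 0x%" PRIx64 " (%" PRIu64 " bytes mapped)\n", buf, paddr, len);

        spdk_dma_free(buf); /* heap shrinks; pages freed exactly as allocated */
        return 0;
    }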
00:27:59.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:59.590 EAL: Restoring previous memory policy: 4 00:27:59.590 EAL: Calling mem event callback 'spdk:(nil)' 00:27:59.590 EAL: request: mp_malloc_sync 00:27:59.590 EAL: No shared files mode enabled, IPC is disabled 00:27:59.590 EAL: Heap on socket 0 was expanded by 1026MB 00:28:01.493 EAL: Calling mem event callback 'spdk:(nil)' 00:28:01.751 EAL: request: mp_malloc_sync 00:28:01.751 EAL: No shared files mode enabled, IPC is disabled 00:28:01.751 EAL: Heap on socket 0 was shrunk by 1026MB 00:28:04.334 passed 00:28:04.334 00:28:04.334 Run Summary: Type Total Ran Passed Failed Inactive 00:28:04.334 suites 1 1 n/a 0 0 00:28:04.334 tests 2 2 2 0 0 00:28:04.334 asserts 5586 5586 5586 0 n/a 00:28:04.334 00:28:04.334 Elapsed time = 9.391 seconds 00:28:04.334 EAL: Calling mem event callback 'spdk:(nil)' 00:28:04.334 EAL: request: mp_malloc_sync 00:28:04.334 EAL: No shared files mode enabled, IPC is disabled 00:28:04.334 EAL: Heap on socket 0 was shrunk by 2MB 00:28:04.334 EAL: No shared files mode enabled, IPC is disabled 00:28:04.334 EAL: No shared files mode enabled, IPC is disabled 00:28:04.334 EAL: No shared files mode enabled, IPC is disabled 00:28:04.334 00:28:04.334 real 0m9.776s 00:28:04.334 user 0m8.607s 00:28:04.334 sys 0m0.995s 00:28:04.334 13:49:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.334 13:49:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:28:04.334 ************************************ 00:28:04.334 END TEST env_vtophys 00:28:04.334 ************************************ 00:28:04.334 13:49:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:28:04.334 13:49:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.334 13:49:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.334 13:49:01 env -- common/autotest_common.sh@10 -- # set +x 00:28:04.334 ************************************ 00:28:04.334 START TEST env_pci 00:28:04.334 ************************************ 00:28:04.334 13:49:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:28:04.334 00:28:04.334 00:28:04.334 CUnit - A unit testing framework for C - Version 2.1-3 00:28:04.334 http://cunit.sourceforge.net/ 00:28:04.334 00:28:04.334 00:28:04.334 Suite: pci 00:28:04.334 Test: pci_hook ...[2024-11-20 13:49:01.158099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57889 has claimed it 00:28:04.334 passed 00:28:04.334 00:28:04.334 EAL: Cannot find device (10000:00:01.0) 00:28:04.334 EAL: Failed to attach device on primary process 00:28:04.334 Run Summary: Type Total Ran Passed Failed Inactive 00:28:04.334 suites 1 1 n/a 0 0 00:28:04.334 tests 1 1 1 0 0 00:28:04.334 asserts 25 25 25 0 n/a 00:28:04.334 00:28:04.334 Elapsed time = 0.007 seconds 00:28:04.334 00:28:04.334 real 0m0.094s 00:28:04.334 user 0m0.042s 00:28:04.334 sys 0m0.051s 00:28:04.334 13:49:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.334 13:49:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:28:04.334 ************************************ 00:28:04.334 END TEST env_pci 00:28:04.334 ************************************ 00:28:04.334 13:49:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:28:04.334 13:49:01 env -- env/env.sh@15 -- # uname 00:28:04.334 13:49:01 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:28:04.334 13:49:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:28:04.334 13:49:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:04.334 13:49:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:04.334 13:49:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.334 13:49:01 env -- common/autotest_common.sh@10 -- # set +x 00:28:04.334 ************************************ 00:28:04.334 START TEST env_dpdk_post_init 00:28:04.334 ************************************ 00:28:04.334 13:49:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:04.334 EAL: Detected CPU lcores: 10 00:28:04.334 EAL: Detected NUMA nodes: 1 00:28:04.334 EAL: Detected shared linkage of DPDK 00:28:04.334 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:04.334 EAL: Selected IOVA mode 'PA' 00:28:04.334 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:04.334 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:28:04.334 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:28:04.334 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:28:04.334 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:28:04.334 Starting DPDK initialization... 00:28:04.334 Starting SPDK post initialization... 00:28:04.334 SPDK NVMe probe 00:28:04.334 Attaching to 0000:00:10.0 00:28:04.334 Attaching to 0000:00:11.0 00:28:04.334 Attaching to 0000:00:12.0 00:28:04.334 Attaching to 0000:00:13.0 00:28:04.334 Attached to 0000:00:10.0 00:28:04.334 Attached to 0000:00:11.0 00:28:04.334 Attached to 0000:00:13.0 00:28:04.335 Attached to 0000:00:12.0 00:28:04.335 Cleaning up... 
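[Note] The Attaching/Attached lines above are the standard spdk_nvme_probe() flow over the four emulated controllers; the point of env_dpdk_post_init is that this works even when the application brought up DPDK itself and SPDK only ran its post-init hook. A sketch of the probe half, assuming the spdk/nvme.h callback signatures (error handling trimmed, app name hypothetical):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* attach to every controller the bus scan finds */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        /* a real application would stash ctrlr and enumerate namespaces here */
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "probe_demo"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* trid == NULL probes the local PCIe bus, as in the log above. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            fprintf(stderr, "probe failed\n");
            return 1;
        }
        return 0;
    }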
00:28:04.335 00:28:04.335 real 0m0.333s 00:28:04.335 user 0m0.123s 00:28:04.335 sys 0m0.116s 00:28:04.335 13:49:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.335 13:49:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:28:04.335 ************************************ 00:28:04.335 END TEST env_dpdk_post_init 00:28:04.335 ************************************ 00:28:04.613 13:49:01 env -- env/env.sh@26 -- # uname 00:28:04.613 13:49:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:28:04.613 13:49:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:28:04.613 13:49:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.613 13:49:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.613 13:49:01 env -- common/autotest_common.sh@10 -- # set +x 00:28:04.613 ************************************ 00:28:04.613 START TEST env_mem_callbacks 00:28:04.613 ************************************ 00:28:04.613 13:49:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:28:04.613 EAL: Detected CPU lcores: 10 00:28:04.613 EAL: Detected NUMA nodes: 1 00:28:04.613 EAL: Detected shared linkage of DPDK 00:28:04.613 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:04.613 EAL: Selected IOVA mode 'PA' 00:28:04.613 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:04.613 00:28:04.613 00:28:04.613 CUnit - A unit testing framework for C - Version 2.1-3 00:28:04.613 http://cunit.sourceforge.net/ 00:28:04.613 00:28:04.613 00:28:04.613 Suite: memory 00:28:04.613 Test: test ... 00:28:04.613 register 0x200000200000 2097152 00:28:04.613 malloc 3145728 00:28:04.613 register 0x200000400000 4194304 00:28:04.613 buf 0x2000004fffc0 len 3145728 PASSED 00:28:04.613 malloc 64 00:28:04.613 buf 0x2000004ffec0 len 64 PASSED 00:28:04.613 malloc 4194304 00:28:04.613 register 0x200000800000 6291456 00:28:04.613 buf 0x2000009fffc0 len 4194304 PASSED 00:28:04.613 free 0x2000004fffc0 3145728 00:28:04.613 free 0x2000004ffec0 64 00:28:04.613 unregister 0x200000400000 4194304 PASSED 00:28:04.613 free 0x2000009fffc0 4194304 00:28:04.613 unregister 0x200000800000 6291456 PASSED 00:28:04.881 malloc 8388608 00:28:04.881 register 0x200000400000 10485760 00:28:04.881 buf 0x2000005fffc0 len 8388608 PASSED 00:28:04.881 free 0x2000005fffc0 8388608 00:28:04.881 unregister 0x200000400000 10485760 PASSED 00:28:04.881 passed 00:28:04.881 00:28:04.881 Run Summary: Type Total Ran Passed Failed Inactive 00:28:04.881 suites 1 1 n/a 0 0 00:28:04.881 tests 1 1 1 0 0 00:28:04.881 asserts 15 15 15 0 n/a 00:28:04.881 00:28:04.881 Elapsed time = 0.111 seconds 00:28:04.881 00:28:04.881 real 0m0.344s 00:28:04.881 user 0m0.150s 00:28:04.881 sys 0m0.091s 00:28:04.881 13:49:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.882 13:49:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:28:04.882 ************************************ 00:28:04.882 END TEST env_mem_callbacks 00:28:04.882 ************************************ 00:28:04.882 00:28:04.882 real 0m11.495s 00:28:04.882 user 0m9.535s 00:28:04.882 sys 0m1.587s 00:28:04.882 13:49:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.882 13:49:02 env -- common/autotest_common.sh@10 -- # set +x 00:28:04.882 ************************************ 00:28:04.882 END TEST env 00:28:04.882 
************************************ 00:28:04.882 13:49:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:28:04.882 13:49:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.882 13:49:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.882 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:28:04.882 ************************************ 00:28:04.882 START TEST rpc 00:28:04.882 ************************************ 00:28:04.882 13:49:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:28:04.882 * Looking for test storage... 00:28:05.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.142 13:49:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.142 13:49:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.142 13:49:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.142 13:49:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.142 13:49:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.142 13:49:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:05.142 13:49:02 rpc -- scripts/common.sh@345 -- # : 1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.142 13:49:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.142 13:49:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@353 -- # local d=1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.142 13:49:02 rpc -- scripts/common.sh@355 -- # echo 1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.142 13:49:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@353 -- # local d=2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.142 13:49:02 rpc -- scripts/common.sh@355 -- # echo 2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.142 13:49:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.142 13:49:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.142 13:49:02 rpc -- scripts/common.sh@368 -- # return 0 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.142 --rc genhtml_branch_coverage=1 00:28:05.142 --rc genhtml_function_coverage=1 00:28:05.142 --rc genhtml_legend=1 00:28:05.142 --rc geninfo_all_blocks=1 00:28:05.142 --rc geninfo_unexecuted_blocks=1 00:28:05.142 00:28:05.142 ' 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.142 --rc genhtml_branch_coverage=1 00:28:05.142 --rc genhtml_function_coverage=1 00:28:05.142 --rc genhtml_legend=1 00:28:05.142 --rc geninfo_all_blocks=1 00:28:05.142 --rc geninfo_unexecuted_blocks=1 00:28:05.142 00:28:05.142 ' 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.142 --rc genhtml_branch_coverage=1 00:28:05.142 --rc genhtml_function_coverage=1 00:28:05.142 --rc genhtml_legend=1 00:28:05.142 --rc geninfo_all_blocks=1 00:28:05.142 --rc geninfo_unexecuted_blocks=1 00:28:05.142 00:28:05.142 ' 00:28:05.142 13:49:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.142 --rc genhtml_branch_coverage=1 00:28:05.142 --rc genhtml_function_coverage=1 00:28:05.142 --rc genhtml_legend=1 00:28:05.142 --rc geninfo_all_blocks=1 00:28:05.142 --rc geninfo_unexecuted_blocks=1 00:28:05.142 00:28:05.142 ' 00:28:05.142 13:49:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58016 00:28:05.142 13:49:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:05.142 13:49:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58016 00:28:05.143 13:49:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 58016 ']' 00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
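[Note] Looping back to the env_mem_callbacks run just before this rpc suite: its register/malloc/buf/free/unregister trace is the application-facing side of the notify hook — spdk_mem_register() announces a region to every mem map, spdk_mem_unregister() retracts it. A short sketch, assuming an already-initialized env and 2 MiB-aligned heap memory via aligned_alloc; whether registering ordinary heap pages is useful depends on the IOVA mode, so treat this purely as an API illustration:

    #include <stdlib.h>
    #include "spdk/env.h"

    static void
    register_demo(void)
    {
        size_t len = 2 * 1024 * 1024;

        /* Both vaddr and len must be 2 MiB multiples — the
         * spdk_mem_register *ERROR* lines in env_memory are the
         * misaligned cases being rejected. */
        void *buf = aligned_alloc(len, len);
        if (buf == NULL) {
            return;
        }

        spdk_mem_register(buf, len);   /* every map's notify_cb: REGISTER */
        spdk_mem_unregister(buf, len); /* every map's notify_cb: UNREGISTER */
        free(buf);
    }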
00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.143 13:49:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 [2024-11-20 13:49:02.402623] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:28:05.143 [2024-11-20 13:49:02.402799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ] 00:28:05.402 [2024-11-20 13:49:02.592771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.662 [2024-11-20 13:49:02.774903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:28:05.662 [2024-11-20 13:49:02.774992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58016' to capture a snapshot of events at runtime. 00:28:05.662 [2024-11-20 13:49:02.775014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.662 [2024-11-20 13:49:02.775042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.662 [2024-11-20 13:49:02.775057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58016 for offline analysis/debug. 00:28:05.662 [2024-11-20 13:49:02.777208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.599 13:49:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.599 13:49:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:28:06.599 13:49:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:28:06.599 13:49:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:28:06.599 13:49:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:28:06.599 13:49:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:28:06.599 13:49:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:06.599 13:49:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.599 13:49:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 ************************************ 00:28:06.599 START TEST rpc_integrity 00:28:06.599 ************************************ 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.599 13:49:03 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.599 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.599 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:06.599 { 00:28:06.599 "name": "Malloc0", 00:28:06.599 "aliases": [ 00:28:06.599 "f67eab1f-670c-4008-a884-3338bed7e5a8" 00:28:06.599 ], 00:28:06.599 "product_name": "Malloc disk", 00:28:06.599 "block_size": 512, 00:28:06.599 "num_blocks": 16384, 00:28:06.599 "uuid": "f67eab1f-670c-4008-a884-3338bed7e5a8", 00:28:06.599 "assigned_rate_limits": { 00:28:06.600 "rw_ios_per_sec": 0, 00:28:06.600 "rw_mbytes_per_sec": 0, 00:28:06.600 "r_mbytes_per_sec": 0, 00:28:06.600 "w_mbytes_per_sec": 0 00:28:06.600 }, 00:28:06.600 "claimed": false, 00:28:06.600 "zoned": false, 00:28:06.600 "supported_io_types": { 00:28:06.600 "read": true, 00:28:06.600 "write": true, 00:28:06.600 "unmap": true, 00:28:06.600 "flush": true, 00:28:06.600 "reset": true, 00:28:06.600 "nvme_admin": false, 00:28:06.600 "nvme_io": false, 00:28:06.600 "nvme_io_md": false, 00:28:06.600 "write_zeroes": true, 00:28:06.600 "zcopy": true, 00:28:06.600 "get_zone_info": false, 00:28:06.600 "zone_management": false, 00:28:06.600 "zone_append": false, 00:28:06.600 "compare": false, 00:28:06.600 "compare_and_write": false, 00:28:06.600 "abort": true, 00:28:06.600 "seek_hole": false, 00:28:06.600 "seek_data": false, 00:28:06.600 "copy": true, 00:28:06.600 "nvme_iov_md": false 00:28:06.600 }, 00:28:06.600 "memory_domains": [ 00:28:06.600 { 00:28:06.600 "dma_device_id": "system", 00:28:06.600 "dma_device_type": 1 00:28:06.600 }, 00:28:06.600 { 00:28:06.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.600 "dma_device_type": 2 00:28:06.600 } 00:28:06.600 ], 00:28:06.600 "driver_specific": {} 00:28:06.600 } 00:28:06.600 ]' 00:28:06.600 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:06.600 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:06.600 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:28:06.600 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.600 [2024-11-20 13:49:03.902873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:28:06.600 [2024-11-20 13:49:03.902963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.600 [2024-11-20 13:49:03.903004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:06.600 [2024-11-20 13:49:03.903021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.600 [2024-11-20 13:49:03.906041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.600 [2024-11-20 13:49:03.906099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:06.600 Passthru0 00:28:06.600 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.600 
13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:28:06.600 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.600 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.859 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.859 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:06.859 { 00:28:06.859 "name": "Malloc0", 00:28:06.859 "aliases": [ 00:28:06.859 "f67eab1f-670c-4008-a884-3338bed7e5a8" 00:28:06.859 ], 00:28:06.859 "product_name": "Malloc disk", 00:28:06.859 "block_size": 512, 00:28:06.859 "num_blocks": 16384, 00:28:06.859 "uuid": "f67eab1f-670c-4008-a884-3338bed7e5a8", 00:28:06.859 "assigned_rate_limits": { 00:28:06.859 "rw_ios_per_sec": 0, 00:28:06.859 "rw_mbytes_per_sec": 0, 00:28:06.859 "r_mbytes_per_sec": 0, 00:28:06.859 "w_mbytes_per_sec": 0 00:28:06.859 }, 00:28:06.859 "claimed": true, 00:28:06.859 "claim_type": "exclusive_write", 00:28:06.859 "zoned": false, 00:28:06.859 "supported_io_types": { 00:28:06.859 "read": true, 00:28:06.859 "write": true, 00:28:06.859 "unmap": true, 00:28:06.859 "flush": true, 00:28:06.859 "reset": true, 00:28:06.859 "nvme_admin": false, 00:28:06.859 "nvme_io": false, 00:28:06.859 "nvme_io_md": false, 00:28:06.859 "write_zeroes": true, 00:28:06.859 "zcopy": true, 00:28:06.859 "get_zone_info": false, 00:28:06.859 "zone_management": false, 00:28:06.859 "zone_append": false, 00:28:06.859 "compare": false, 00:28:06.859 "compare_and_write": false, 00:28:06.859 "abort": true, 00:28:06.859 "seek_hole": false, 00:28:06.859 "seek_data": false, 00:28:06.859 "copy": true, 00:28:06.859 "nvme_iov_md": false 00:28:06.859 }, 00:28:06.859 "memory_domains": [ 00:28:06.859 { 00:28:06.859 "dma_device_id": "system", 00:28:06.859 "dma_device_type": 1 00:28:06.859 }, 00:28:06.859 { 00:28:06.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.859 "dma_device_type": 2 00:28:06.859 } 00:28:06.859 ], 00:28:06.859 "driver_specific": {} 00:28:06.859 }, 00:28:06.859 { 00:28:06.859 "name": "Passthru0", 00:28:06.859 "aliases": [ 00:28:06.859 "f5e69afa-c070-57c6-b685-ef08f2df3417" 00:28:06.859 ], 00:28:06.859 "product_name": "passthru", 00:28:06.859 "block_size": 512, 00:28:06.859 "num_blocks": 16384, 00:28:06.859 "uuid": "f5e69afa-c070-57c6-b685-ef08f2df3417", 00:28:06.859 "assigned_rate_limits": { 00:28:06.859 "rw_ios_per_sec": 0, 00:28:06.859 "rw_mbytes_per_sec": 0, 00:28:06.859 "r_mbytes_per_sec": 0, 00:28:06.859 "w_mbytes_per_sec": 0 00:28:06.859 }, 00:28:06.859 "claimed": false, 00:28:06.859 "zoned": false, 00:28:06.859 "supported_io_types": { 00:28:06.859 "read": true, 00:28:06.859 "write": true, 00:28:06.859 "unmap": true, 00:28:06.859 "flush": true, 00:28:06.859 "reset": true, 00:28:06.859 "nvme_admin": false, 00:28:06.859 "nvme_io": false, 00:28:06.859 "nvme_io_md": false, 00:28:06.859 "write_zeroes": true, 00:28:06.859 "zcopy": true, 00:28:06.859 "get_zone_info": false, 00:28:06.859 "zone_management": false, 00:28:06.859 "zone_append": false, 00:28:06.859 "compare": false, 00:28:06.859 "compare_and_write": false, 00:28:06.859 "abort": true, 00:28:06.859 "seek_hole": false, 00:28:06.859 "seek_data": false, 00:28:06.859 "copy": true, 00:28:06.859 "nvme_iov_md": false 00:28:06.860 }, 00:28:06.860 "memory_domains": [ 00:28:06.860 { 00:28:06.860 "dma_device_id": "system", 00:28:06.860 "dma_device_type": 1 00:28:06.860 }, 00:28:06.860 { 00:28:06.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.860 "dma_device_type": 2 
00:28:06.860 } 00:28:06.860 ], 00:28:06.860 "driver_specific": { 00:28:06.860 "passthru": { 00:28:06.860 "name": "Passthru0", 00:28:06.860 "base_bdev_name": "Malloc0" 00:28:06.860 } 00:28:06.860 } 00:28:06.860 } 00:28:06.860 ]' 00:28:06.860 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:06.860 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:06.860 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:06.860 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 13:49:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.860 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 13:49:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 13:49:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 13:49:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:28:06.860 13:49:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:06.860 13:49:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:06.860 00:28:06.860 real 0m0.294s 00:28:06.860 user 0m0.141s 00:28:06.860 sys 0m0.055s 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.860 13:49:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 ************************************ 00:28:06.860 END TEST rpc_integrity 00:28:06.860 ************************************ 00:28:06.860 13:49:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:28:06.860 13:49:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:06.860 13:49:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.860 13:49:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 ************************************ 00:28:06.860 START TEST rpc_plugins 00:28:06.860 ************************************ 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:28:06.860 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:28:06.860 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:06.860 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.860 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:28:06.860 { 00:28:06.860 "name": "Malloc1", 00:28:06.860 "aliases": 
[ 00:28:06.860 "e03d6e72-ca35-4052-8730-3fabe7ba6341" 00:28:06.860 ], 00:28:06.860 "product_name": "Malloc disk", 00:28:06.860 "block_size": 4096, 00:28:06.860 "num_blocks": 256, 00:28:06.860 "uuid": "e03d6e72-ca35-4052-8730-3fabe7ba6341", 00:28:06.860 "assigned_rate_limits": { 00:28:06.860 "rw_ios_per_sec": 0, 00:28:06.860 "rw_mbytes_per_sec": 0, 00:28:06.860 "r_mbytes_per_sec": 0, 00:28:06.860 "w_mbytes_per_sec": 0 00:28:06.860 }, 00:28:06.860 "claimed": false, 00:28:06.860 "zoned": false, 00:28:06.860 "supported_io_types": { 00:28:06.860 "read": true, 00:28:06.860 "write": true, 00:28:06.860 "unmap": true, 00:28:06.860 "flush": true, 00:28:06.860 "reset": true, 00:28:06.860 "nvme_admin": false, 00:28:06.860 "nvme_io": false, 00:28:06.860 "nvme_io_md": false, 00:28:06.860 "write_zeroes": true, 00:28:06.860 "zcopy": true, 00:28:06.860 "get_zone_info": false, 00:28:06.860 "zone_management": false, 00:28:06.860 "zone_append": false, 00:28:06.860 "compare": false, 00:28:06.860 "compare_and_write": false, 00:28:06.860 "abort": true, 00:28:06.860 "seek_hole": false, 00:28:06.860 "seek_data": false, 00:28:06.860 "copy": true, 00:28:06.860 "nvme_iov_md": false 00:28:06.860 }, 00:28:06.860 "memory_domains": [ 00:28:06.860 { 00:28:06.860 "dma_device_id": "system", 00:28:06.860 "dma_device_type": 1 00:28:06.860 }, 00:28:06.860 { 00:28:06.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.860 "dma_device_type": 2 00:28:06.860 } 00:28:06.860 ], 00:28:06.860 "driver_specific": {} 00:28:06.860 } 00:28:06.860 ]' 00:28:06.860 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:28:07.119 13:49:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:28:07.119 00:28:07.119 real 0m0.182s 00:28:07.119 user 0m0.120s 00:28:07.119 sys 0m0.021s 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.119 13:49:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 ************************************ 00:28:07.119 END TEST rpc_plugins 00:28:07.119 ************************************ 00:28:07.119 13:49:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:28:07.119 13:49:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.119 13:49:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.119 13:49:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 ************************************ 00:28:07.119 START TEST rpc_trace_cmd_test 00:28:07.119 ************************************ 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:28:07.119 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58016", 00:28:07.119 "tpoint_group_mask": "0x8", 00:28:07.119 "iscsi_conn": { 00:28:07.119 "mask": "0x2", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "scsi": { 00:28:07.119 "mask": "0x4", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "bdev": { 00:28:07.119 "mask": "0x8", 00:28:07.119 "tpoint_mask": "0xffffffffffffffff" 00:28:07.119 }, 00:28:07.119 "nvmf_rdma": { 00:28:07.119 "mask": "0x10", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "nvmf_tcp": { 00:28:07.119 "mask": "0x20", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "ftl": { 00:28:07.119 "mask": "0x40", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "blobfs": { 00:28:07.119 "mask": "0x80", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "dsa": { 00:28:07.119 "mask": "0x200", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "thread": { 00:28:07.119 "mask": "0x400", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "nvme_pcie": { 00:28:07.119 "mask": "0x800", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.119 "iaa": { 00:28:07.119 "mask": "0x1000", 00:28:07.119 "tpoint_mask": "0x0" 00:28:07.119 }, 00:28:07.120 "nvme_tcp": { 00:28:07.120 "mask": "0x2000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 }, 00:28:07.120 "bdev_nvme": { 00:28:07.120 "mask": "0x4000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 }, 00:28:07.120 "sock": { 00:28:07.120 "mask": "0x8000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 }, 00:28:07.120 "blob": { 00:28:07.120 "mask": "0x10000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 }, 00:28:07.120 "bdev_raid": { 00:28:07.120 "mask": "0x20000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 }, 00:28:07.120 "scheduler": { 00:28:07.120 "mask": "0x40000", 00:28:07.120 "tpoint_mask": "0x0" 00:28:07.120 } 00:28:07.120 }' 00:28:07.120 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:28:07.120 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:28:07.120 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:28:07.379 00:28:07.379 real 0m0.250s 00:28:07.379 user 0m0.207s 00:28:07.379 sys 0m0.036s 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:28:07.379 13:49:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.379 ************************************ 00:28:07.379 END TEST rpc_trace_cmd_test 00:28:07.379 ************************************ 00:28:07.379 13:49:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:28:07.379 13:49:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:28:07.379 13:49:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:28:07.379 13:49:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.379 13:49:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.379 13:49:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:07.379 ************************************ 00:28:07.379 START TEST rpc_daemon_integrity 00:28:07.379 ************************************ 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:07.379 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:07.638 { 00:28:07.638 "name": "Malloc2", 00:28:07.638 "aliases": [ 00:28:07.638 "40721046-c4b5-46a2-bb26-ddabd8bb2565" 00:28:07.638 ], 00:28:07.638 "product_name": "Malloc disk", 00:28:07.638 "block_size": 512, 00:28:07.638 "num_blocks": 16384, 00:28:07.638 "uuid": "40721046-c4b5-46a2-bb26-ddabd8bb2565", 00:28:07.638 "assigned_rate_limits": { 00:28:07.638 "rw_ios_per_sec": 0, 00:28:07.638 "rw_mbytes_per_sec": 0, 00:28:07.638 "r_mbytes_per_sec": 0, 00:28:07.638 "w_mbytes_per_sec": 0 00:28:07.638 }, 00:28:07.638 "claimed": false, 00:28:07.638 "zoned": false, 00:28:07.638 "supported_io_types": { 00:28:07.638 "read": true, 00:28:07.638 "write": true, 00:28:07.638 "unmap": true, 00:28:07.638 "flush": true, 00:28:07.638 "reset": true, 00:28:07.638 "nvme_admin": false, 00:28:07.638 "nvme_io": false, 00:28:07.638 "nvme_io_md": false, 00:28:07.638 "write_zeroes": true, 00:28:07.638 "zcopy": true, 00:28:07.638 "get_zone_info": false, 00:28:07.638 "zone_management": false, 00:28:07.638 "zone_append": false, 00:28:07.638 "compare": false, 00:28:07.638 
"compare_and_write": false, 00:28:07.638 "abort": true, 00:28:07.638 "seek_hole": false, 00:28:07.638 "seek_data": false, 00:28:07.638 "copy": true, 00:28:07.638 "nvme_iov_md": false 00:28:07.638 }, 00:28:07.638 "memory_domains": [ 00:28:07.638 { 00:28:07.638 "dma_device_id": "system", 00:28:07.638 "dma_device_type": 1 00:28:07.638 }, 00:28:07.638 { 00:28:07.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.638 "dma_device_type": 2 00:28:07.638 } 00:28:07.638 ], 00:28:07.638 "driver_specific": {} 00:28:07.638 } 00:28:07.638 ]' 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.638 [2024-11-20 13:49:04.869128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:28:07.638 [2024-11-20 13:49:04.869213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:07.638 [2024-11-20 13:49:04.869244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:07.638 [2024-11-20 13:49:04.869261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:07.638 [2024-11-20 13:49:04.872039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:07.638 [2024-11-20 13:49:04.872083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:07.638 Passthru0 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.638 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:07.638 { 00:28:07.638 "name": "Malloc2", 00:28:07.638 "aliases": [ 00:28:07.638 "40721046-c4b5-46a2-bb26-ddabd8bb2565" 00:28:07.638 ], 00:28:07.638 "product_name": "Malloc disk", 00:28:07.638 "block_size": 512, 00:28:07.638 "num_blocks": 16384, 00:28:07.638 "uuid": "40721046-c4b5-46a2-bb26-ddabd8bb2565", 00:28:07.638 "assigned_rate_limits": { 00:28:07.638 "rw_ios_per_sec": 0, 00:28:07.638 "rw_mbytes_per_sec": 0, 00:28:07.638 "r_mbytes_per_sec": 0, 00:28:07.638 "w_mbytes_per_sec": 0 00:28:07.638 }, 00:28:07.638 "claimed": true, 00:28:07.638 "claim_type": "exclusive_write", 00:28:07.638 "zoned": false, 00:28:07.638 "supported_io_types": { 00:28:07.638 "read": true, 00:28:07.638 "write": true, 00:28:07.638 "unmap": true, 00:28:07.638 "flush": true, 00:28:07.638 "reset": true, 00:28:07.638 "nvme_admin": false, 00:28:07.638 "nvme_io": false, 00:28:07.638 "nvme_io_md": false, 00:28:07.638 "write_zeroes": true, 00:28:07.638 "zcopy": true, 00:28:07.638 "get_zone_info": false, 00:28:07.638 "zone_management": false, 00:28:07.638 "zone_append": false, 00:28:07.638 "compare": false, 00:28:07.638 "compare_and_write": false, 00:28:07.638 "abort": true, 00:28:07.638 "seek_hole": false, 00:28:07.638 "seek_data": false, 
00:28:07.638 "copy": true, 00:28:07.638 "nvme_iov_md": false 00:28:07.638 }, 00:28:07.638 "memory_domains": [ 00:28:07.638 { 00:28:07.638 "dma_device_id": "system", 00:28:07.638 "dma_device_type": 1 00:28:07.638 }, 00:28:07.638 { 00:28:07.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.638 "dma_device_type": 2 00:28:07.638 } 00:28:07.638 ], 00:28:07.638 "driver_specific": {} 00:28:07.638 }, 00:28:07.638 { 00:28:07.638 "name": "Passthru0", 00:28:07.638 "aliases": [ 00:28:07.638 "fc46257f-abc7-590f-abc8-525967283433" 00:28:07.638 ], 00:28:07.638 "product_name": "passthru", 00:28:07.638 "block_size": 512, 00:28:07.638 "num_blocks": 16384, 00:28:07.638 "uuid": "fc46257f-abc7-590f-abc8-525967283433", 00:28:07.638 "assigned_rate_limits": { 00:28:07.638 "rw_ios_per_sec": 0, 00:28:07.638 "rw_mbytes_per_sec": 0, 00:28:07.638 "r_mbytes_per_sec": 0, 00:28:07.638 "w_mbytes_per_sec": 0 00:28:07.638 }, 00:28:07.638 "claimed": false, 00:28:07.638 "zoned": false, 00:28:07.638 "supported_io_types": { 00:28:07.638 "read": true, 00:28:07.638 "write": true, 00:28:07.638 "unmap": true, 00:28:07.639 "flush": true, 00:28:07.639 "reset": true, 00:28:07.639 "nvme_admin": false, 00:28:07.639 "nvme_io": false, 00:28:07.639 "nvme_io_md": false, 00:28:07.639 "write_zeroes": true, 00:28:07.639 "zcopy": true, 00:28:07.639 "get_zone_info": false, 00:28:07.639 "zone_management": false, 00:28:07.639 "zone_append": false, 00:28:07.639 "compare": false, 00:28:07.639 "compare_and_write": false, 00:28:07.639 "abort": true, 00:28:07.639 "seek_hole": false, 00:28:07.639 "seek_data": false, 00:28:07.639 "copy": true, 00:28:07.639 "nvme_iov_md": false 00:28:07.639 }, 00:28:07.639 "memory_domains": [ 00:28:07.639 { 00:28:07.639 "dma_device_id": "system", 00:28:07.639 "dma_device_type": 1 00:28:07.639 }, 00:28:07.639 { 00:28:07.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.639 "dma_device_type": 2 00:28:07.639 } 00:28:07.639 ], 00:28:07.639 "driver_specific": { 00:28:07.639 "passthru": { 00:28:07.639 "name": "Passthru0", 00:28:07.639 "base_bdev_name": "Malloc2" 00:28:07.639 } 00:28:07.639 } 00:28:07.639 } 00:28:07.639 ]' 00:28:07.639 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:07.639 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:07.639 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:07.639 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.639 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.997 13:49:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:07.997 00:28:07.997 real 0m0.391s 00:28:07.997 user 0m0.228s 00:28:07.997 sys 0m0.059s 00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.997 13:49:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:07.997 ************************************ 00:28:07.997 END TEST rpc_daemon_integrity 00:28:07.997 ************************************ 00:28:07.997 13:49:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:07.997 13:49:05 rpc -- rpc/rpc.sh@84 -- # killprocess 58016 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 58016 ']' 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@958 -- # kill -0 58016 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@959 -- # uname 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58016 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:07.997 13:49:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:07.997 killing process with pid 58016 00:28:07.998 13:49:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58016' 00:28:07.998 13:49:05 rpc -- common/autotest_common.sh@973 -- # kill 58016 00:28:07.998 13:49:05 rpc -- common/autotest_common.sh@978 -- # wait 58016 00:28:11.296 00:28:11.296 real 0m5.779s 00:28:11.296 user 0m6.353s 00:28:11.296 sys 0m0.912s 00:28:11.296 13:49:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.296 ************************************ 00:28:11.296 END TEST rpc 00:28:11.296 ************************************ 00:28:11.296 13:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 13:49:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:28:11.296 13:49:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:11.296 13:49:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.296 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 ************************************ 00:28:11.296 START TEST skip_rpc 00:28:11.296 ************************************ 00:28:11.296 13:49:07 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:28:11.296 * Looking for test storage... 
00:28:11.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.296 13:49:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:11.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.296 --rc genhtml_branch_coverage=1 00:28:11.296 --rc genhtml_function_coverage=1 00:28:11.296 --rc genhtml_legend=1 00:28:11.296 --rc geninfo_all_blocks=1 00:28:11.296 --rc geninfo_unexecuted_blocks=1 00:28:11.296 00:28:11.296 ' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:11.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.296 --rc genhtml_branch_coverage=1 00:28:11.296 --rc genhtml_function_coverage=1 00:28:11.296 --rc genhtml_legend=1 00:28:11.296 --rc geninfo_all_blocks=1 00:28:11.296 --rc geninfo_unexecuted_blocks=1 00:28:11.296 00:28:11.296 ' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:28:11.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.296 --rc genhtml_branch_coverage=1 00:28:11.296 --rc genhtml_function_coverage=1 00:28:11.296 --rc genhtml_legend=1 00:28:11.296 --rc geninfo_all_blocks=1 00:28:11.296 --rc geninfo_unexecuted_blocks=1 00:28:11.296 00:28:11.296 ' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:11.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.296 --rc genhtml_branch_coverage=1 00:28:11.296 --rc genhtml_function_coverage=1 00:28:11.296 --rc genhtml_legend=1 00:28:11.296 --rc geninfo_all_blocks=1 00:28:11.296 --rc geninfo_unexecuted_blocks=1 00:28:11.296 00:28:11.296 ' 00:28:11.296 13:49:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:28:11.296 13:49:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:28:11.296 13:49:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.296 13:49:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:11.296 ************************************ 00:28:11.296 START TEST skip_rpc 00:28:11.296 ************************************ 00:28:11.296 13:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:28:11.296 13:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58250 00:28:11.296 13:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:28:11.296 13:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:11.296 13:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:28:11.296 [2024-11-20 13:49:08.272993] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:28:11.296 [2024-11-20 13:49:08.273233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:28:11.296 [2024-11-20 13:49:08.480343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.554 [2024-11-20 13:49:08.671160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.818 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58250 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58250 ']' 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58250 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58250 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:16.819 killing process with pid 58250 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58250' 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58250 00:28:16.819 13:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58250 00:28:18.734 00:28:18.734 real 0m7.732s 00:28:18.734 user 0m7.170s 00:28:18.734 sys 0m0.459s 00:28:18.734 13:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.734 ************************************ 00:28:18.734 END TEST skip_rpc 00:28:18.734 ************************************ 00:28:18.734 13:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:28:18.734 13:49:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:28:18.734 13:49:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:18.734 13:49:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.734 13:49:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:18.734 ************************************ 00:28:18.734 START TEST skip_rpc_with_json 00:28:18.734 ************************************ 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58360 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58360 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58360 ']' 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.734 13:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:18.734 [2024-11-20 13:49:16.052943] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:28:18.734 [2024-11-20 13:49:16.053123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58360 ] 00:28:18.993 [2024-11-20 13:49:16.244728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.252 [2024-11-20 13:49:16.376297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:20.186 [2024-11-20 13:49:17.367131] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:28:20.186 request: 00:28:20.186 { 00:28:20.186 "trtype": "tcp", 00:28:20.186 "method": "nvmf_get_transports", 00:28:20.186 "req_id": 1 00:28:20.186 } 00:28:20.186 Got JSON-RPC error response 00:28:20.186 response: 00:28:20.186 { 00:28:20.186 "code": -19, 00:28:20.186 "message": "No such device" 00:28:20.186 } 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:20.186 [2024-11-20 13:49:17.383282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.186 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:20.445 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.445 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:28:20.445 { 00:28:20.445 "subsystems": [ 00:28:20.445 { 00:28:20.445 "subsystem": "fsdev", 00:28:20.445 "config": [ 00:28:20.445 { 00:28:20.445 "method": "fsdev_set_opts", 00:28:20.445 "params": { 00:28:20.445 "fsdev_io_pool_size": 65535, 00:28:20.445 "fsdev_io_cache_size": 256 00:28:20.445 } 00:28:20.445 } 00:28:20.445 ] 00:28:20.445 }, 00:28:20.445 { 00:28:20.445 "subsystem": "keyring", 00:28:20.445 "config": [] 00:28:20.445 }, 00:28:20.445 { 00:28:20.445 "subsystem": "iobuf", 00:28:20.445 "config": [ 00:28:20.445 { 00:28:20.445 "method": "iobuf_set_options", 00:28:20.445 "params": { 00:28:20.445 "small_pool_count": 8192, 00:28:20.445 "large_pool_count": 1024, 00:28:20.445 "small_bufsize": 8192, 00:28:20.445 "large_bufsize": 135168, 00:28:20.445 "enable_numa": false 00:28:20.445 } 00:28:20.445 } 00:28:20.445 ] 00:28:20.445 }, 00:28:20.445 { 00:28:20.445 "subsystem": "sock", 00:28:20.445 "config": [ 00:28:20.445 { 
00:28:20.445 "method": "sock_set_default_impl", 00:28:20.445 "params": { 00:28:20.445 "impl_name": "posix" 00:28:20.445 } 00:28:20.445 }, 00:28:20.445 { 00:28:20.445 "method": "sock_impl_set_options", 00:28:20.445 "params": { 00:28:20.445 "impl_name": "ssl", 00:28:20.445 "recv_buf_size": 4096, 00:28:20.445 "send_buf_size": 4096, 00:28:20.445 "enable_recv_pipe": true, 00:28:20.445 "enable_quickack": false, 00:28:20.445 "enable_placement_id": 0, 00:28:20.445 "enable_zerocopy_send_server": true, 00:28:20.445 "enable_zerocopy_send_client": false, 00:28:20.445 "zerocopy_threshold": 0, 00:28:20.445 "tls_version": 0, 00:28:20.445 "enable_ktls": false 00:28:20.445 } 00:28:20.445 }, 00:28:20.445 { 00:28:20.445 "method": "sock_impl_set_options", 00:28:20.445 "params": { 00:28:20.445 "impl_name": "posix", 00:28:20.445 "recv_buf_size": 2097152, 00:28:20.445 "send_buf_size": 2097152, 00:28:20.445 "enable_recv_pipe": true, 00:28:20.446 "enable_quickack": false, 00:28:20.446 "enable_placement_id": 0, 00:28:20.446 "enable_zerocopy_send_server": true, 00:28:20.446 "enable_zerocopy_send_client": false, 00:28:20.446 "zerocopy_threshold": 0, 00:28:20.446 "tls_version": 0, 00:28:20.446 "enable_ktls": false 00:28:20.446 } 00:28:20.446 } 00:28:20.446 ] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "vmd", 00:28:20.446 "config": [] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "accel", 00:28:20.446 "config": [ 00:28:20.446 { 00:28:20.446 "method": "accel_set_options", 00:28:20.446 "params": { 00:28:20.446 "small_cache_size": 128, 00:28:20.446 "large_cache_size": 16, 00:28:20.446 "task_count": 2048, 00:28:20.446 "sequence_count": 2048, 00:28:20.446 "buf_count": 2048 00:28:20.446 } 00:28:20.446 } 00:28:20.446 ] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "bdev", 00:28:20.446 "config": [ 00:28:20.446 { 00:28:20.446 "method": "bdev_set_options", 00:28:20.446 "params": { 00:28:20.446 "bdev_io_pool_size": 65535, 00:28:20.446 "bdev_io_cache_size": 256, 00:28:20.446 "bdev_auto_examine": true, 00:28:20.446 "iobuf_small_cache_size": 128, 00:28:20.446 "iobuf_large_cache_size": 16 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "bdev_raid_set_options", 00:28:20.446 "params": { 00:28:20.446 "process_window_size_kb": 1024, 00:28:20.446 "process_max_bandwidth_mb_sec": 0 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "bdev_iscsi_set_options", 00:28:20.446 "params": { 00:28:20.446 "timeout_sec": 30 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "bdev_nvme_set_options", 00:28:20.446 "params": { 00:28:20.446 "action_on_timeout": "none", 00:28:20.446 "timeout_us": 0, 00:28:20.446 "timeout_admin_us": 0, 00:28:20.446 "keep_alive_timeout_ms": 10000, 00:28:20.446 "arbitration_burst": 0, 00:28:20.446 "low_priority_weight": 0, 00:28:20.446 "medium_priority_weight": 0, 00:28:20.446 "high_priority_weight": 0, 00:28:20.446 "nvme_adminq_poll_period_us": 10000, 00:28:20.446 "nvme_ioq_poll_period_us": 0, 00:28:20.446 "io_queue_requests": 0, 00:28:20.446 "delay_cmd_submit": true, 00:28:20.446 "transport_retry_count": 4, 00:28:20.446 "bdev_retry_count": 3, 00:28:20.446 "transport_ack_timeout": 0, 00:28:20.446 "ctrlr_loss_timeout_sec": 0, 00:28:20.446 "reconnect_delay_sec": 0, 00:28:20.446 "fast_io_fail_timeout_sec": 0, 00:28:20.446 "disable_auto_failback": false, 00:28:20.446 "generate_uuids": false, 00:28:20.446 "transport_tos": 0, 00:28:20.446 "nvme_error_stat": false, 00:28:20.446 "rdma_srq_size": 0, 00:28:20.446 "io_path_stat": false, 
00:28:20.446 "allow_accel_sequence": false, 00:28:20.446 "rdma_max_cq_size": 0, 00:28:20.446 "rdma_cm_event_timeout_ms": 0, 00:28:20.446 "dhchap_digests": [ 00:28:20.446 "sha256", 00:28:20.446 "sha384", 00:28:20.446 "sha512" 00:28:20.446 ], 00:28:20.446 "dhchap_dhgroups": [ 00:28:20.446 "null", 00:28:20.446 "ffdhe2048", 00:28:20.446 "ffdhe3072", 00:28:20.446 "ffdhe4096", 00:28:20.446 "ffdhe6144", 00:28:20.446 "ffdhe8192" 00:28:20.446 ] 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "bdev_nvme_set_hotplug", 00:28:20.446 "params": { 00:28:20.446 "period_us": 100000, 00:28:20.446 "enable": false 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "bdev_wait_for_examine" 00:28:20.446 } 00:28:20.446 ] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "scsi", 00:28:20.446 "config": null 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "scheduler", 00:28:20.446 "config": [ 00:28:20.446 { 00:28:20.446 "method": "framework_set_scheduler", 00:28:20.446 "params": { 00:28:20.446 "name": "static" 00:28:20.446 } 00:28:20.446 } 00:28:20.446 ] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "vhost_scsi", 00:28:20.446 "config": [] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "vhost_blk", 00:28:20.446 "config": [] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "ublk", 00:28:20.446 "config": [] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "nbd", 00:28:20.446 "config": [] 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "subsystem": "nvmf", 00:28:20.446 "config": [ 00:28:20.446 { 00:28:20.446 "method": "nvmf_set_config", 00:28:20.446 "params": { 00:28:20.446 "discovery_filter": "match_any", 00:28:20.446 "admin_cmd_passthru": { 00:28:20.446 "identify_ctrlr": false 00:28:20.446 }, 00:28:20.446 "dhchap_digests": [ 00:28:20.446 "sha256", 00:28:20.446 "sha384", 00:28:20.446 "sha512" 00:28:20.446 ], 00:28:20.446 "dhchap_dhgroups": [ 00:28:20.446 "null", 00:28:20.446 "ffdhe2048", 00:28:20.446 "ffdhe3072", 00:28:20.446 "ffdhe4096", 00:28:20.446 "ffdhe6144", 00:28:20.446 "ffdhe8192" 00:28:20.446 ] 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "nvmf_set_max_subsystems", 00:28:20.446 "params": { 00:28:20.446 "max_subsystems": 1024 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "nvmf_set_crdt", 00:28:20.446 "params": { 00:28:20.446 "crdt1": 0, 00:28:20.446 "crdt2": 0, 00:28:20.446 "crdt3": 0 00:28:20.446 } 00:28:20.446 }, 00:28:20.446 { 00:28:20.446 "method": "nvmf_create_transport", 00:28:20.446 "params": { 00:28:20.447 "trtype": "TCP", 00:28:20.447 "max_queue_depth": 128, 00:28:20.447 "max_io_qpairs_per_ctrlr": 127, 00:28:20.447 "in_capsule_data_size": 4096, 00:28:20.447 "max_io_size": 131072, 00:28:20.447 "io_unit_size": 131072, 00:28:20.447 "max_aq_depth": 128, 00:28:20.447 "num_shared_buffers": 511, 00:28:20.447 "buf_cache_size": 4294967295, 00:28:20.447 "dif_insert_or_strip": false, 00:28:20.447 "zcopy": false, 00:28:20.447 "c2h_success": true, 00:28:20.447 "sock_priority": 0, 00:28:20.447 "abort_timeout_sec": 1, 00:28:20.447 "ack_timeout": 0, 00:28:20.447 "data_wr_pool_size": 0 00:28:20.447 } 00:28:20.447 } 00:28:20.447 ] 00:28:20.447 }, 00:28:20.447 { 00:28:20.447 "subsystem": "iscsi", 00:28:20.447 "config": [ 00:28:20.447 { 00:28:20.447 "method": "iscsi_set_options", 00:28:20.447 "params": { 00:28:20.447 "node_base": "iqn.2016-06.io.spdk", 00:28:20.447 "max_sessions": 128, 00:28:20.447 "max_connections_per_session": 2, 00:28:20.447 "max_queue_depth": 64, 00:28:20.447 
"default_time2wait": 2, 00:28:20.447 "default_time2retain": 20, 00:28:20.447 "first_burst_length": 8192, 00:28:20.447 "immediate_data": true, 00:28:20.447 "allow_duplicated_isid": false, 00:28:20.447 "error_recovery_level": 0, 00:28:20.447 "nop_timeout": 60, 00:28:20.447 "nop_in_interval": 30, 00:28:20.447 "disable_chap": false, 00:28:20.447 "require_chap": false, 00:28:20.447 "mutual_chap": false, 00:28:20.447 "chap_group": 0, 00:28:20.447 "max_large_datain_per_connection": 64, 00:28:20.447 "max_r2t_per_connection": 4, 00:28:20.447 "pdu_pool_size": 36864, 00:28:20.447 "immediate_data_pool_size": 16384, 00:28:20.447 "data_out_pool_size": 2048 00:28:20.447 } 00:28:20.447 } 00:28:20.447 ] 00:28:20.447 } 00:28:20.447 ] 00:28:20.447 } 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58360 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58360 ']' 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58360 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58360 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.447 killing process with pid 58360 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58360' 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58360 00:28:20.447 13:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58360 00:28:23.752 13:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:28:23.752 13:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58416 00:28:23.752 13:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58416 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58416 ']' 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58416 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58416 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.038 killing process with pid 58416 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58416' 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58416 00:28:29.038 13:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58416 00:28:30.943 13:49:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:28:30.943 13:49:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:28:30.943 00:28:30.943 real 0m12.295s 00:28:30.943 user 0m11.745s 00:28:30.943 sys 0m1.002s 00:28:30.943 13:49:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.943 13:49:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:30.943 ************************************ 00:28:30.943 END TEST skip_rpc_with_json 00:28:30.943 ************************************ 00:28:30.943 13:49:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:28:30.943 13:49:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:30.943 13:49:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.943 13:49:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:30.943 ************************************ 00:28:30.943 START TEST skip_rpc_with_delay 00:28:30.943 ************************************ 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:31.203 [2024-11-20 13:49:28.412298] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
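The startup failure above is reproducible outside the harness; the two flags are mutually exclusive by design. A sketch, assuming a built tree:

    # --wait-for-rpc pauses init until an RPC call resumes it, which cannot
    # work when --no-rpc-server disables the RPC server entirely.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # non-zero; the NOT wrapper in the test turns that into a pass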
00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:31.203 00:28:31.203 real 0m0.220s 00:28:31.203 user 0m0.111s 00:28:31.203 sys 0m0.106s 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.203 13:49:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:28:31.203 ************************************ 00:28:31.203 END TEST skip_rpc_with_delay 00:28:31.203 ************************************ 00:28:31.462 13:49:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:28:31.462 13:49:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:28:31.462 13:49:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:28:31.462 13:49:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:31.462 13:49:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.462 13:49:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.462 ************************************ 00:28:31.462 START TEST exit_on_failed_rpc_init 00:28:31.462 ************************************ 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58555 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58555 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58555 ']' 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.462 13:49:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.462 [2024-11-20 13:49:28.684754] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:28:31.462 [2024-11-20 13:49:28.684943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58555 ] 00:28:31.721 [2024-11-20 13:49:28.887593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.980 [2024-11-20 13:49:29.061010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:28:33.025 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:33.025 [2024-11-20 13:49:30.228712] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:28:33.025 [2024-11-20 13:49:30.228963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:28:33.283 [2024-11-20 13:49:30.425618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.283 [2024-11-20 13:49:30.557562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.283 [2024-11-20 13:49:30.557669] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
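This is the collision the test provokes on purpose: a second target launched while pid 58555 still owns the default RPC socket. A sketch of the same situation; the -r option on the last line (the app's RPC listen-address flag, not used anywhere in this trace) is shown as a hypothetical way to run two instances side by side:

    build/bin/spdk_tgt -m 0x1 &     # first instance binds /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2       # fails: socket 'in use. Specify another.'
    # hypothetical second instance on its own socket instead:
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock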
00:28:33.283 [2024-11-20 13:49:30.557686] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:33.283 [2024-11-20 13:49:30.557706] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58555 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58555 ']' 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58555 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.543 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58555 00:28:33.801 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.801 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.801 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58555' 00:28:33.801 killing process with pid 58555 00:28:33.801 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58555 00:28:33.801 13:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58555 00:28:36.332 00:28:36.332 real 0m5.002s 00:28:36.332 user 0m5.393s 00:28:36.332 sys 0m0.679s 00:28:36.332 13:49:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.332 13:49:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:36.332 ************************************ 00:28:36.332 END TEST exit_on_failed_rpc_init 00:28:36.332 ************************************ 00:28:36.332 13:49:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:28:36.332 00:28:36.332 real 0m25.669s 00:28:36.332 user 0m24.597s 00:28:36.332 sys 0m2.493s 00:28:36.332 13:49:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.332 13:49:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:36.332 ************************************ 00:28:36.332 END TEST skip_rpc 00:28:36.332 ************************************ 00:28:36.332 13:49:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:28:36.332 13:49:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.332 13:49:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.332 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:28:36.590 
************************************ 00:28:36.590 START TEST rpc_client 00:28:36.590 ************************************ 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:28:36.590 * Looking for test storage... 00:28:36.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.590 13:49:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:36.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.590 --rc genhtml_branch_coverage=1 00:28:36.590 --rc genhtml_function_coverage=1 00:28:36.590 --rc genhtml_legend=1 00:28:36.590 --rc geninfo_all_blocks=1 00:28:36.590 --rc geninfo_unexecuted_blocks=1 00:28:36.590 00:28:36.590 ' 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:36.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.590 --rc genhtml_branch_coverage=1 00:28:36.590 --rc genhtml_function_coverage=1 00:28:36.590 --rc genhtml_legend=1 00:28:36.590 --rc geninfo_all_blocks=1 00:28:36.590 --rc geninfo_unexecuted_blocks=1 00:28:36.590 00:28:36.590 ' 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:36.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.590 --rc genhtml_branch_coverage=1 00:28:36.590 --rc genhtml_function_coverage=1 00:28:36.590 --rc genhtml_legend=1 00:28:36.590 --rc geninfo_all_blocks=1 00:28:36.590 --rc geninfo_unexecuted_blocks=1 00:28:36.590 00:28:36.590 ' 00:28:36.590 13:49:33 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:36.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.590 --rc genhtml_branch_coverage=1 00:28:36.590 --rc genhtml_function_coverage=1 00:28:36.590 --rc genhtml_legend=1 00:28:36.590 --rc geninfo_all_blocks=1 00:28:36.590 --rc geninfo_unexecuted_blocks=1 00:28:36.590 00:28:36.590 ' 00:28:36.591 13:49:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:28:36.591 OK 00:28:36.848 13:49:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:28:36.848 00:28:36.848 real 0m0.266s 00:28:36.848 user 0m0.156s 00:28:36.848 sys 0m0.123s 00:28:36.848 13:49:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.848 ************************************ 00:28:36.848 END TEST rpc_client 00:28:36.848 ************************************ 00:28:36.848 13:49:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:28:36.848 13:49:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:28:36.848 13:49:33 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.848 13:49:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.848 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:28:36.848 ************************************ 00:28:36.848 START TEST json_config 00:28:36.848 ************************************ 00:28:36.848 13:49:33 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:28:36.848 13:49:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:36.848 13:49:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:28:36.848 13:49:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:36.848 13:49:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:36.848 13:49:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.848 13:49:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.848 13:49:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.848 13:49:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.848 13:49:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.848 13:49:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.848 13:49:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.848 13:49:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.848 13:49:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.848 13:49:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.848 13:49:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.848 13:49:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:28:36.848 13:49:34 json_config -- scripts/common.sh@345 -- # : 1 00:28:36.848 13:49:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.848 13:49:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.848 13:49:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:28:36.848 13:49:34 json_config -- scripts/common.sh@353 -- # local d=1 00:28:36.848 13:49:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.848 13:49:34 json_config -- scripts/common.sh@355 -- # echo 1 00:28:36.849 13:49:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.849 13:49:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:28:36.849 13:49:34 json_config -- scripts/common.sh@353 -- # local d=2 00:28:36.849 13:49:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.849 13:49:34 json_config -- scripts/common.sh@355 -- # echo 2 00:28:36.849 13:49:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.849 13:49:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.849 13:49:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.849 13:49:34 json_config -- scripts/common.sh@368 -- # return 0 00:28:36.849 13:49:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.849 13:49:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:36.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.849 --rc genhtml_branch_coverage=1 00:28:36.849 --rc genhtml_function_coverage=1 00:28:36.849 --rc genhtml_legend=1 00:28:36.849 --rc geninfo_all_blocks=1 00:28:36.849 --rc geninfo_unexecuted_blocks=1 00:28:36.849 00:28:36.849 ' 00:28:36.849 13:49:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:36.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.849 --rc genhtml_branch_coverage=1 00:28:36.849 --rc genhtml_function_coverage=1 00:28:36.849 --rc genhtml_legend=1 00:28:36.849 --rc geninfo_all_blocks=1 00:28:36.849 --rc geninfo_unexecuted_blocks=1 00:28:36.849 00:28:36.849 ' 00:28:36.849 13:49:34 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:36.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.849 --rc genhtml_branch_coverage=1 00:28:36.849 --rc genhtml_function_coverage=1 00:28:36.849 --rc genhtml_legend=1 00:28:36.849 --rc geninfo_all_blocks=1 00:28:36.849 --rc geninfo_unexecuted_blocks=1 00:28:36.849 00:28:36.849 ' 00:28:36.849 13:49:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:36.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.849 --rc genhtml_branch_coverage=1 00:28:36.849 --rc genhtml_function_coverage=1 00:28:36.849 --rc genhtml_legend=1 00:28:36.849 --rc geninfo_all_blocks=1 00:28:36.849 --rc geninfo_unexecuted_blocks=1 00:28:36.849 00:28:36.849 ' 00:28:36.849 13:49:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.849 13:49:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97392525-1e58-4e74-9818-6fb5e8322d2f 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=97392525-1e58-4e74-9818-6fb5e8322d2f 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.849 13:49:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:36.849 13:49:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.849 13:49:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.849 13:49:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.849 13:49:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.849 13:49:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.849 13:49:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.849 13:49:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.849 13:49:34 json_config -- paths/export.sh@5 -- # export PATH 00:28:37.108 13:49:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@51 -- # : 0 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.108 13:49:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.108 13:49:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:28:37.108 WARNING: No tests are enabled so not running JSON configuration tests 00:28:37.108 13:49:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:28:37.108 00:28:37.108 real 0m0.205s 00:28:37.108 user 0m0.135s 00:28:37.108 sys 0m0.078s 00:28:37.108 13:49:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.108 13:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:37.108 ************************************ 00:28:37.108 END TEST json_config 00:28:37.108 ************************************ 00:28:37.108 13:49:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:28:37.108 13:49:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.108 13:49:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.108 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:28:37.108 ************************************ 00:28:37.108 START TEST json_config_extra_key 00:28:37.108 ************************************ 00:28:37.108 13:49:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:28:37.108 13:49:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.108 13:49:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.108 13:49:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.108 13:49:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.108 13:49:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.108 13:49:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.108 13:49:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.108 13:49:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.108 13:49:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.366 13:49:34 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.366 13:49:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:28:37.366 13:49:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.366 13:49:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.366 --rc genhtml_branch_coverage=1 00:28:37.366 --rc genhtml_function_coverage=1 00:28:37.366 --rc genhtml_legend=1 00:28:37.366 --rc geninfo_all_blocks=1 00:28:37.366 --rc geninfo_unexecuted_blocks=1 00:28:37.366 00:28:37.366 ' 00:28:37.366 13:49:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.366 --rc genhtml_branch_coverage=1 00:28:37.366 --rc genhtml_function_coverage=1 00:28:37.366 --rc genhtml_legend=1 00:28:37.366 --rc geninfo_all_blocks=1 00:28:37.366 --rc geninfo_unexecuted_blocks=1 00:28:37.366 00:28:37.366 ' 00:28:37.366 13:49:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.366 --rc genhtml_branch_coverage=1 00:28:37.366 --rc genhtml_function_coverage=1 00:28:37.366 --rc genhtml_legend=1 00:28:37.366 --rc geninfo_all_blocks=1 00:28:37.366 --rc geninfo_unexecuted_blocks=1 00:28:37.366 00:28:37.366 ' 00:28:37.366 13:49:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.366 --rc genhtml_branch_coverage=1 00:28:37.366 --rc 
genhtml_function_coverage=1 00:28:37.366 --rc genhtml_legend=1 00:28:37.366 --rc geninfo_all_blocks=1 00:28:37.367 --rc geninfo_unexecuted_blocks=1 00:28:37.367 00:28:37.367 ' 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97392525-1e58-4e74-9818-6fb5e8322d2f 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=97392525-1e58-4e74-9818-6fb5e8322d2f 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:37.367 13:49:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.367 13:49:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.367 13:49:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.367 13:49:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.367 13:49:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.367 13:49:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.367 13:49:34 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.367 13:49:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:28:37.367 13:49:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.367 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.367 13:49:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:28:37.367 INFO: launching applications... 00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
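The PATH echoed above carries the /opt/golangci, /opt/protoc, and /opt/go entries several times over because paths/export.sh prepends each directory unconditionally every time it is sourced. A sketch of an idempotent prepend that would keep the list flat (prepend_path is a hypothetical helper, not part of paths/export.sh):

  prepend_path() {
    case ":$PATH:" in
      *":$1:"*) ;;              # already present; leave PATH untouched
      *) PATH=$1:$PATH ;;
    esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/go/1.21.1/bin   # second call is a no-op
  export PATH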
00:28:37.367 13:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58794 00:28:37.367 Waiting for target to run... 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58794 /var/tmp/spdk_tgt.sock 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58794 ']' 00:28:37.367 13:49:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:28:37.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.367 13:49:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:28:37.367 [2024-11-20 13:49:34.621078] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:28:37.367 [2024-11-20 13:49:34.621274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ] 00:28:37.934 [2024-11-20 13:49:35.064221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.934 [2024-11-20 13:49:35.234496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.868 13:49:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.868 13:49:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:28:38.868 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:28:38.868 INFO: shutting down applications... 00:28:38.868 13:49:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
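json_config_test_start_app launched spdk_tgt as pid 58794 against /var/tmp/spdk_tgt.sock, and the shutdown traced next sends SIGINT and then polls the pid with kill -0, sleeping 0.5 s between probes for at most 30 iterations. A sketch of that poll loop, with the pid and bound taken from this run:

  app_pid=58794
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only probes liveness
    sleep 0.5
  done
  kill -0 "$app_pid" 2>/dev/null && echo 'target did not exit' >&2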
00:28:38.868 13:49:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58794 ]] 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58794 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:38.868 13:49:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:39.433 13:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:39.433 13:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:39.433 13:49:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:39.433 13:49:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:40.073 13:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:40.073 13:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:40.073 13:49:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:40.073 13:49:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:40.331 13:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:40.331 13:49:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:40.331 13:49:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:40.331 13:49:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:40.897 13:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:40.897 13:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:40.897 13:49:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:40.897 13:49:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:41.465 13:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:41.465 13:49:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:41.465 13:49:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:41.465 13:49:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:42.029 13:49:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:42.029 13:49:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:42.029 13:49:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:42.029 13:49:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58794 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:28:42.595 13:49:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:28:42.595 13:49:39 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:28:42.595 SPDK target shutdown done 00:28:42.595 Success 00:28:42.595 13:49:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:28:42.595 00:28:42.595 real 0m5.420s 00:28:42.595 user 0m4.860s 00:28:42.595 sys 0m0.701s 00:28:42.595 13:49:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.595 13:49:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:28:42.595 ************************************ 00:28:42.595 END TEST json_config_extra_key 00:28:42.595 ************************************ 00:28:42.595 13:49:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:28:42.595 13:49:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:42.595 13:49:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.595 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.595 ************************************ 00:28:42.595 START TEST alias_rpc 00:28:42.596 ************************************ 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:28:42.596 * Looking for test storage... 00:28:42.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.596 13:49:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:42.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.596 --rc genhtml_branch_coverage=1 00:28:42.596 --rc genhtml_function_coverage=1 00:28:42.596 --rc genhtml_legend=1 00:28:42.596 --rc geninfo_all_blocks=1 00:28:42.596 --rc geninfo_unexecuted_blocks=1 00:28:42.596 00:28:42.596 ' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:42.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.596 --rc genhtml_branch_coverage=1 00:28:42.596 --rc genhtml_function_coverage=1 00:28:42.596 --rc genhtml_legend=1 00:28:42.596 --rc geninfo_all_blocks=1 00:28:42.596 --rc geninfo_unexecuted_blocks=1 00:28:42.596 00:28:42.596 ' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:42.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.596 --rc genhtml_branch_coverage=1 00:28:42.596 --rc genhtml_function_coverage=1 00:28:42.596 --rc genhtml_legend=1 00:28:42.596 --rc geninfo_all_blocks=1 00:28:42.596 --rc geninfo_unexecuted_blocks=1 00:28:42.596 00:28:42.596 ' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:42.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.596 --rc genhtml_branch_coverage=1 00:28:42.596 --rc genhtml_function_coverage=1 00:28:42.596 --rc genhtml_legend=1 00:28:42.596 --rc geninfo_all_blocks=1 00:28:42.596 --rc geninfo_unexecuted_blocks=1 00:28:42.596 00:28:42.596 ' 00:28:42.596 13:49:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:28:42.596 13:49:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58917 00:28:42.596 13:49:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58917 00:28:42.596 13:49:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58917 ']' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
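alias_rpc.sh arms an ERR trap so any failed command tears the target down before the test exits, then waits for the freshly started spdk_tgt (pid 58917) to open its RPC socket. A simplified sketch of that startup guard; the real waitforlisten retries an actual RPC call up to max_retries=100 times rather than merely probing for the socket file:

  spdk_tgt_pid=58917
  trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$rpc_addr" ] && break   # socket exists; target is listening
    sleep 0.1
  done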
00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.596 13:49:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:42.855 [2024-11-20 13:49:40.032837] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:28:42.855 [2024-11-20 13:49:40.033104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:28:43.113 [2024-11-20 13:49:40.246721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.113 [2024-11-20 13:49:40.374676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.106 13:49:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.106 13:49:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:44.106 13:49:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:28:44.367 13:49:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58917 00:28:44.367 13:49:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58917 ']' 00:28:44.367 13:49:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58917 00:28:44.367 13:49:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:28:44.367 13:49:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.367 13:49:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58917 00:28:44.626 13:49:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.626 13:49:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.626 killing process with pid 58917 00:28:44.626 13:49:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58917' 00:28:44.626 13:49:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 58917 00:28:44.626 13:49:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 58917 00:28:47.159 00:28:47.159 real 0m4.708s 00:28:47.159 user 0m4.740s 00:28:47.159 sys 0m0.654s 00:28:47.159 13:49:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.159 13:49:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:47.159 ************************************ 00:28:47.159 END TEST alias_rpc 00:28:47.159 ************************************ 00:28:47.159 13:49:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:28:47.159 13:49:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:28:47.159 13:49:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.159 13:49:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.159 13:49:44 -- common/autotest_common.sh@10 -- # set +x 00:28:47.159 ************************************ 00:28:47.159 START TEST spdkcli_tcp 00:28:47.159 ************************************ 00:28:47.159 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:28:47.418 * Looking for test storage... 
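killprocess, traced above for pid 58917, looks up the command name with ps --no-headers -o comm= and refuses to signal anything running as sudo before killing and reaping the target. A condensed sketch of its Linux branch (the traced helper also has a uname guard for other platforms, omitted here):

  killprocess() {
    local pid=$1 process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }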
00:28:47.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.418 13:49:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.418 --rc genhtml_branch_coverage=1 00:28:47.418 --rc genhtml_function_coverage=1 00:28:47.418 --rc genhtml_legend=1 00:28:47.418 --rc geninfo_all_blocks=1 00:28:47.418 --rc geninfo_unexecuted_blocks=1 00:28:47.418 00:28:47.418 ' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.418 --rc genhtml_branch_coverage=1 00:28:47.418 --rc genhtml_function_coverage=1 00:28:47.418 --rc genhtml_legend=1 00:28:47.418 --rc geninfo_all_blocks=1 00:28:47.418 --rc geninfo_unexecuted_blocks=1 00:28:47.418 
00:28:47.418 ' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.418 --rc genhtml_branch_coverage=1 00:28:47.418 --rc genhtml_function_coverage=1 00:28:47.418 --rc genhtml_legend=1 00:28:47.418 --rc geninfo_all_blocks=1 00:28:47.418 --rc geninfo_unexecuted_blocks=1 00:28:47.418 00:28:47.418 ' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.418 --rc genhtml_branch_coverage=1 00:28:47.418 --rc genhtml_function_coverage=1 00:28:47.418 --rc genhtml_legend=1 00:28:47.418 --rc geninfo_all_blocks=1 00:28:47.418 --rc geninfo_unexecuted_blocks=1 00:28:47.418 00:28:47.418 ' 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59030 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59030 00:28:47.418 13:49:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.418 13:49:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.677 [2024-11-20 13:49:44.798965] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:28:47.677 [2024-11-20 13:49:44.799101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:28:47.677 [2024-11-20 13:49:44.986735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:47.963 [2024-11-20 13:49:45.162814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.963 [2024-11-20 13:49:45.162847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.983 13:49:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.983 13:49:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:28:48.983 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59053 00:28:48.983 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:28:48.983 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:28:49.241 [ 00:28:49.241 "bdev_malloc_delete", 00:28:49.241 "bdev_malloc_create", 00:28:49.241 "bdev_null_resize", 00:28:49.241 "bdev_null_delete", 00:28:49.241 "bdev_null_create", 00:28:49.241 "bdev_nvme_cuse_unregister", 00:28:49.241 "bdev_nvme_cuse_register", 00:28:49.241 "bdev_opal_new_user", 00:28:49.241 "bdev_opal_set_lock_state", 00:28:49.241 "bdev_opal_delete", 00:28:49.241 "bdev_opal_get_info", 00:28:49.241 "bdev_opal_create", 00:28:49.241 "bdev_nvme_opal_revert", 00:28:49.241 "bdev_nvme_opal_init", 00:28:49.241 "bdev_nvme_send_cmd", 00:28:49.241 "bdev_nvme_set_keys", 00:28:49.241 "bdev_nvme_get_path_iostat", 00:28:49.241 "bdev_nvme_get_mdns_discovery_info", 00:28:49.241 "bdev_nvme_stop_mdns_discovery", 00:28:49.241 "bdev_nvme_start_mdns_discovery", 00:28:49.241 "bdev_nvme_set_multipath_policy", 00:28:49.241 "bdev_nvme_set_preferred_path", 00:28:49.241 "bdev_nvme_get_io_paths", 00:28:49.241 "bdev_nvme_remove_error_injection", 00:28:49.241 "bdev_nvme_add_error_injection", 00:28:49.241 "bdev_nvme_get_discovery_info", 00:28:49.241 "bdev_nvme_stop_discovery", 00:28:49.241 "bdev_nvme_start_discovery", 00:28:49.241 "bdev_nvme_get_controller_health_info", 00:28:49.241 "bdev_nvme_disable_controller", 00:28:49.241 "bdev_nvme_enable_controller", 00:28:49.241 "bdev_nvme_reset_controller", 00:28:49.241 "bdev_nvme_get_transport_statistics", 00:28:49.241 "bdev_nvme_apply_firmware", 00:28:49.241 "bdev_nvme_detach_controller", 00:28:49.241 "bdev_nvme_get_controllers", 00:28:49.241 "bdev_nvme_attach_controller", 00:28:49.241 "bdev_nvme_set_hotplug", 00:28:49.241 "bdev_nvme_set_options", 00:28:49.241 "bdev_passthru_delete", 00:28:49.241 "bdev_passthru_create", 00:28:49.241 "bdev_lvol_set_parent_bdev", 00:28:49.241 "bdev_lvol_set_parent", 00:28:49.241 "bdev_lvol_check_shallow_copy", 00:28:49.241 "bdev_lvol_start_shallow_copy", 00:28:49.241 "bdev_lvol_grow_lvstore", 00:28:49.241 "bdev_lvol_get_lvols", 00:28:49.241 "bdev_lvol_get_lvstores", 00:28:49.241 "bdev_lvol_delete", 00:28:49.241 "bdev_lvol_set_read_only", 00:28:49.241 "bdev_lvol_resize", 00:28:49.241 "bdev_lvol_decouple_parent", 00:28:49.241 "bdev_lvol_inflate", 00:28:49.241 "bdev_lvol_rename", 00:28:49.241 "bdev_lvol_clone_bdev", 00:28:49.241 "bdev_lvol_clone", 00:28:49.241 "bdev_lvol_snapshot", 00:28:49.241 "bdev_lvol_create", 00:28:49.241 "bdev_lvol_delete_lvstore", 00:28:49.241 "bdev_lvol_rename_lvstore", 00:28:49.241 
"bdev_lvol_create_lvstore", 00:28:49.241 "bdev_raid_set_options", 00:28:49.241 "bdev_raid_remove_base_bdev", 00:28:49.241 "bdev_raid_add_base_bdev", 00:28:49.241 "bdev_raid_delete", 00:28:49.241 "bdev_raid_create", 00:28:49.241 "bdev_raid_get_bdevs", 00:28:49.241 "bdev_error_inject_error", 00:28:49.241 "bdev_error_delete", 00:28:49.241 "bdev_error_create", 00:28:49.241 "bdev_split_delete", 00:28:49.241 "bdev_split_create", 00:28:49.241 "bdev_delay_delete", 00:28:49.241 "bdev_delay_create", 00:28:49.241 "bdev_delay_update_latency", 00:28:49.241 "bdev_zone_block_delete", 00:28:49.241 "bdev_zone_block_create", 00:28:49.241 "blobfs_create", 00:28:49.241 "blobfs_detect", 00:28:49.241 "blobfs_set_cache_size", 00:28:49.241 "bdev_xnvme_delete", 00:28:49.241 "bdev_xnvme_create", 00:28:49.241 "bdev_aio_delete", 00:28:49.241 "bdev_aio_rescan", 00:28:49.241 "bdev_aio_create", 00:28:49.241 "bdev_ftl_set_property", 00:28:49.241 "bdev_ftl_get_properties", 00:28:49.241 "bdev_ftl_get_stats", 00:28:49.241 "bdev_ftl_unmap", 00:28:49.241 "bdev_ftl_unload", 00:28:49.241 "bdev_ftl_delete", 00:28:49.241 "bdev_ftl_load", 00:28:49.241 "bdev_ftl_create", 00:28:49.241 "bdev_virtio_attach_controller", 00:28:49.241 "bdev_virtio_scsi_get_devices", 00:28:49.241 "bdev_virtio_detach_controller", 00:28:49.241 "bdev_virtio_blk_set_hotplug", 00:28:49.241 "bdev_iscsi_delete", 00:28:49.241 "bdev_iscsi_create", 00:28:49.241 "bdev_iscsi_set_options", 00:28:49.241 "accel_error_inject_error", 00:28:49.241 "ioat_scan_accel_module", 00:28:49.241 "dsa_scan_accel_module", 00:28:49.241 "iaa_scan_accel_module", 00:28:49.241 "keyring_file_remove_key", 00:28:49.241 "keyring_file_add_key", 00:28:49.242 "keyring_linux_set_options", 00:28:49.242 "fsdev_aio_delete", 00:28:49.242 "fsdev_aio_create", 00:28:49.242 "iscsi_get_histogram", 00:28:49.242 "iscsi_enable_histogram", 00:28:49.242 "iscsi_set_options", 00:28:49.242 "iscsi_get_auth_groups", 00:28:49.242 "iscsi_auth_group_remove_secret", 00:28:49.242 "iscsi_auth_group_add_secret", 00:28:49.242 "iscsi_delete_auth_group", 00:28:49.242 "iscsi_create_auth_group", 00:28:49.242 "iscsi_set_discovery_auth", 00:28:49.242 "iscsi_get_options", 00:28:49.242 "iscsi_target_node_request_logout", 00:28:49.242 "iscsi_target_node_set_redirect", 00:28:49.242 "iscsi_target_node_set_auth", 00:28:49.242 "iscsi_target_node_add_lun", 00:28:49.242 "iscsi_get_stats", 00:28:49.242 "iscsi_get_connections", 00:28:49.242 "iscsi_portal_group_set_auth", 00:28:49.242 "iscsi_start_portal_group", 00:28:49.242 "iscsi_delete_portal_group", 00:28:49.242 "iscsi_create_portal_group", 00:28:49.242 "iscsi_get_portal_groups", 00:28:49.242 "iscsi_delete_target_node", 00:28:49.242 "iscsi_target_node_remove_pg_ig_maps", 00:28:49.242 "iscsi_target_node_add_pg_ig_maps", 00:28:49.242 "iscsi_create_target_node", 00:28:49.242 "iscsi_get_target_nodes", 00:28:49.242 "iscsi_delete_initiator_group", 00:28:49.242 "iscsi_initiator_group_remove_initiators", 00:28:49.242 "iscsi_initiator_group_add_initiators", 00:28:49.242 "iscsi_create_initiator_group", 00:28:49.242 "iscsi_get_initiator_groups", 00:28:49.242 "nvmf_set_crdt", 00:28:49.242 "nvmf_set_config", 00:28:49.242 "nvmf_set_max_subsystems", 00:28:49.242 "nvmf_stop_mdns_prr", 00:28:49.242 "nvmf_publish_mdns_prr", 00:28:49.242 "nvmf_subsystem_get_listeners", 00:28:49.242 "nvmf_subsystem_get_qpairs", 00:28:49.242 "nvmf_subsystem_get_controllers", 00:28:49.242 "nvmf_get_stats", 00:28:49.242 "nvmf_get_transports", 00:28:49.242 "nvmf_create_transport", 00:28:49.242 "nvmf_get_targets", 00:28:49.242 
"nvmf_delete_target", 00:28:49.242 "nvmf_create_target", 00:28:49.242 "nvmf_subsystem_allow_any_host", 00:28:49.242 "nvmf_subsystem_set_keys", 00:28:49.242 "nvmf_subsystem_remove_host", 00:28:49.242 "nvmf_subsystem_add_host", 00:28:49.242 "nvmf_ns_remove_host", 00:28:49.242 "nvmf_ns_add_host", 00:28:49.242 "nvmf_subsystem_remove_ns", 00:28:49.242 "nvmf_subsystem_set_ns_ana_group", 00:28:49.242 "nvmf_subsystem_add_ns", 00:28:49.242 "nvmf_subsystem_listener_set_ana_state", 00:28:49.242 "nvmf_discovery_get_referrals", 00:28:49.242 "nvmf_discovery_remove_referral", 00:28:49.242 "nvmf_discovery_add_referral", 00:28:49.242 "nvmf_subsystem_remove_listener", 00:28:49.242 "nvmf_subsystem_add_listener", 00:28:49.242 "nvmf_delete_subsystem", 00:28:49.242 "nvmf_create_subsystem", 00:28:49.242 "nvmf_get_subsystems", 00:28:49.242 "env_dpdk_get_mem_stats", 00:28:49.242 "nbd_get_disks", 00:28:49.242 "nbd_stop_disk", 00:28:49.242 "nbd_start_disk", 00:28:49.242 "ublk_recover_disk", 00:28:49.242 "ublk_get_disks", 00:28:49.242 "ublk_stop_disk", 00:28:49.242 "ublk_start_disk", 00:28:49.242 "ublk_destroy_target", 00:28:49.242 "ublk_create_target", 00:28:49.242 "virtio_blk_create_transport", 00:28:49.242 "virtio_blk_get_transports", 00:28:49.242 "vhost_controller_set_coalescing", 00:28:49.242 "vhost_get_controllers", 00:28:49.242 "vhost_delete_controller", 00:28:49.242 "vhost_create_blk_controller", 00:28:49.242 "vhost_scsi_controller_remove_target", 00:28:49.242 "vhost_scsi_controller_add_target", 00:28:49.242 "vhost_start_scsi_controller", 00:28:49.242 "vhost_create_scsi_controller", 00:28:49.242 "thread_set_cpumask", 00:28:49.242 "scheduler_set_options", 00:28:49.242 "framework_get_governor", 00:28:49.242 "framework_get_scheduler", 00:28:49.242 "framework_set_scheduler", 00:28:49.242 "framework_get_reactors", 00:28:49.242 "thread_get_io_channels", 00:28:49.242 "thread_get_pollers", 00:28:49.242 "thread_get_stats", 00:28:49.242 "framework_monitor_context_switch", 00:28:49.242 "spdk_kill_instance", 00:28:49.242 "log_enable_timestamps", 00:28:49.242 "log_get_flags", 00:28:49.242 "log_clear_flag", 00:28:49.242 "log_set_flag", 00:28:49.242 "log_get_level", 00:28:49.242 "log_set_level", 00:28:49.242 "log_get_print_level", 00:28:49.242 "log_set_print_level", 00:28:49.242 "framework_enable_cpumask_locks", 00:28:49.242 "framework_disable_cpumask_locks", 00:28:49.242 "framework_wait_init", 00:28:49.242 "framework_start_init", 00:28:49.242 "scsi_get_devices", 00:28:49.242 "bdev_get_histogram", 00:28:49.242 "bdev_enable_histogram", 00:28:49.242 "bdev_set_qos_limit", 00:28:49.242 "bdev_set_qd_sampling_period", 00:28:49.242 "bdev_get_bdevs", 00:28:49.242 "bdev_reset_iostat", 00:28:49.242 "bdev_get_iostat", 00:28:49.242 "bdev_examine", 00:28:49.242 "bdev_wait_for_examine", 00:28:49.242 "bdev_set_options", 00:28:49.242 "accel_get_stats", 00:28:49.242 "accel_set_options", 00:28:49.242 "accel_set_driver", 00:28:49.242 "accel_crypto_key_destroy", 00:28:49.242 "accel_crypto_keys_get", 00:28:49.242 "accel_crypto_key_create", 00:28:49.242 "accel_assign_opc", 00:28:49.242 "accel_get_module_info", 00:28:49.242 "accel_get_opc_assignments", 00:28:49.242 "vmd_rescan", 00:28:49.242 "vmd_remove_device", 00:28:49.242 "vmd_enable", 00:28:49.242 "sock_get_default_impl", 00:28:49.242 "sock_set_default_impl", 00:28:49.242 "sock_impl_set_options", 00:28:49.242 "sock_impl_get_options", 00:28:49.242 "iobuf_get_stats", 00:28:49.242 "iobuf_set_options", 00:28:49.242 "keyring_get_keys", 00:28:49.242 "framework_get_pci_devices", 00:28:49.242 
"framework_get_config", 00:28:49.242 "framework_get_subsystems", 00:28:49.242 "fsdev_set_opts", 00:28:49.242 "fsdev_get_opts", 00:28:49.242 "trace_get_info", 00:28:49.242 "trace_get_tpoint_group_mask", 00:28:49.242 "trace_disable_tpoint_group", 00:28:49.242 "trace_enable_tpoint_group", 00:28:49.242 "trace_clear_tpoint_mask", 00:28:49.242 "trace_set_tpoint_mask", 00:28:49.242 "notify_get_notifications", 00:28:49.242 "notify_get_types", 00:28:49.242 "spdk_get_version", 00:28:49.242 "rpc_get_methods" 00:28:49.242 ] 00:28:49.242 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:49.242 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:28:49.242 13:49:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59030 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59030 ']' 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59030 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59030 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.242 killing process with pid 59030 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59030' 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59030 00:28:49.242 13:49:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59030 00:28:52.528 00:28:52.528 real 0m4.714s 00:28:52.528 user 0m8.679s 00:28:52.528 sys 0m0.677s 00:28:52.528 13:49:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.528 ************************************ 00:28:52.528 13:49:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:52.528 END TEST spdkcli_tcp 00:28:52.528 ************************************ 00:28:52.528 13:49:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:28:52.528 13:49:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:52.528 13:49:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.528 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:28:52.528 ************************************ 00:28:52.528 START TEST dpdk_mem_utility 00:28:52.528 ************************************ 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:28:52.528 * Looking for test storage... 
00:28:52.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.528 13:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.528 --rc genhtml_branch_coverage=1 00:28:52.528 --rc genhtml_function_coverage=1 00:28:52.528 --rc genhtml_legend=1 00:28:52.528 --rc geninfo_all_blocks=1 00:28:52.528 --rc geninfo_unexecuted_blocks=1 00:28:52.528 00:28:52.528 ' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:52.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.528 --rc 
genhtml_branch_coverage=1 00:28:52.528 --rc genhtml_function_coverage=1 00:28:52.528 --rc genhtml_legend=1 00:28:52.528 --rc geninfo_all_blocks=1 00:28:52.528 --rc geninfo_unexecuted_blocks=1 00:28:52.528 00:28:52.528 ' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.528 --rc genhtml_branch_coverage=1 00:28:52.528 --rc genhtml_function_coverage=1 00:28:52.528 --rc genhtml_legend=1 00:28:52.528 --rc geninfo_all_blocks=1 00:28:52.528 --rc geninfo_unexecuted_blocks=1 00:28:52.528 00:28:52.528 ' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.528 --rc genhtml_branch_coverage=1 00:28:52.528 --rc genhtml_function_coverage=1 00:28:52.528 --rc genhtml_legend=1 00:28:52.528 --rc geninfo_all_blocks=1 00:28:52.528 --rc geninfo_unexecuted_blocks=1 00:28:52.528 00:28:52.528 ' 00:28:52.528 13:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:28:52.528 13:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59158 00:28:52.528 13:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:52.528 13:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59158 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59158 ']' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.528 13:49:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:28:52.528 [2024-11-20 13:49:49.597615] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:28:52.528 [2024-11-20 13:49:49.597796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59158 ] 00:28:52.528 [2024-11-20 13:49:49.793301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.787 [2024-11-20 13:49:49.913003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.726 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.726 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:28:53.726 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:28:53.726 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:28:53.726 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.726 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:28:53.726 { 00:28:53.726 "filename": "/tmp/spdk_mem_dump.txt" 00:28:53.726 } 00:28:53.726 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.726 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:28:53.726 DPDK memory size 824.000000 MiB in 1 heap(s) 00:28:53.726 1 heaps totaling size 824.000000 MiB 00:28:53.726 size: 824.000000 MiB heap id: 0 00:28:53.726 end heaps---------- 00:28:53.726 9 mempools totaling size 603.782043 MiB 00:28:53.726 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:28:53.726 size: 158.602051 MiB name: PDU_data_out_Pool 00:28:53.726 size: 100.555481 MiB name: bdev_io_59158 00:28:53.726 size: 50.003479 MiB name: msgpool_59158 00:28:53.726 size: 36.509338 MiB name: fsdev_io_59158 00:28:53.726 size: 21.763794 MiB name: PDU_Pool 00:28:53.726 size: 19.513306 MiB name: SCSI_TASK_Pool 00:28:53.726 size: 4.133484 MiB name: evtpool_59158 00:28:53.726 size: 0.026123 MiB name: Session_Pool 00:28:53.726 end mempools------- 00:28:53.726 6 memzones totaling size 4.142822 MiB 00:28:53.726 size: 1.000366 MiB name: RG_ring_0_59158 00:28:53.726 size: 1.000366 MiB name: RG_ring_1_59158 00:28:53.726 size: 1.000366 MiB name: RG_ring_4_59158 00:28:53.726 size: 1.000366 MiB name: RG_ring_5_59158 00:28:53.726 size: 0.125366 MiB name: RG_ring_2_59158 00:28:53.726 size: 0.015991 MiB name: RG_ring_3_59158 00:28:53.726 end memzones------- 00:28:53.726 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:28:53.726 heap id: 0 total size: 824.000000 MiB number of busy elements: 310 number of free elements: 18 00:28:53.726 list of free elements. 
size: 16.782593 MiB
00:28:53.726 element at address: 0x200006400000 with size: 1.995972 MiB
00:28:53.726 element at address: 0x20000a600000 with size: 1.995972 MiB
00:28:53.726 element at address: 0x200003e00000 with size: 1.991028 MiB
00:28:53.726 element at address: 0x200019500040 with size: 0.999939 MiB
00:28:53.726 element at address: 0x200019900040 with size: 0.999939 MiB
00:28:53.726 element at address: 0x200019a00000 with size: 0.999084 MiB
00:28:53.726 element at address: 0x200032600000 with size: 0.994324 MiB
00:28:53.726 element at address: 0x200000400000 with size: 0.992004 MiB
00:28:53.726 element at address: 0x200019200000 with size: 0.959656 MiB
00:28:53.726 element at address: 0x200019d00040 with size: 0.936401 MiB
00:28:53.726 element at address: 0x200000200000 with size: 0.716980 MiB
00:28:53.726 element at address: 0x20001b400000 with size: 0.564148 MiB
00:28:53.726 element at address: 0x200000c00000 with size: 0.489197 MiB
00:28:53.726 element at address: 0x200019600000 with size: 0.487976 MiB
00:28:53.726 element at address: 0x200019e00000 with size: 0.485413 MiB
00:28:53.726 element at address: 0x200012c00000 with size: 0.433228 MiB
00:28:53.726 element at address: 0x200028800000 with size: 0.390442 MiB
00:28:53.726 element at address: 0x200000800000 with size: 0.350891 MiB
00:28:53.726 list of standard malloc elements. size: 199.286499 MiB
00:28:53.726 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:28:53.726 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:28:53.726 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:28:53.726 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:28:53.726 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:28:53.726 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:28:53.726 element at address: 0x200019deff40 with size: 0.062683 MiB
00:28:53.726 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:28:53.726 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:28:53.726 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:28:53.726 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further elements of 0.000244 MiB each, at addresses 0x2000002d7b00 through 0x20002886fe80, elided here for readability]
00:28:53.728 list of memzone associated elements. size: 607.930908 MiB
00:28:53.728 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:28:53.728 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:28:53.728 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:28:53.728 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:28:53.728 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:28:53.728 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59158_0
00:28:53.728 element at address: 0x200000dff340 with size: 48.003113 MiB
00:28:53.728 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59158_0
00:28:53.728 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:28:53.728 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59158_0
[remaining memzone associations elided: PDU/SCSI-task pool headers, evtpool/msgpool entries, and the RG_MP_*/RG_ring_* regions for pid 59158, from 20.255615 MiB down to 0.000366 MiB, each pairing an element address with its memzone name]
00:28:53.729 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:28:53.729 13:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59158
00:28:53.729 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59158 ']'
00:28:53.729 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59158
00:28:53.729 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:28:53.729 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:53.729 13:49:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59158
00:28:53.729 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 59158
00:28:53.729 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:53.729 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59158'
00:28:53.729 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59158
00:28:53.729 13:49:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59158
00:28:56.338
00:28:56.338 real 0m4.355s
00:28:56.338 user 0m4.307s
00:28:56.338 sys 0m0.627s
00:28:56.338 13:49:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:56.338 13:49:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:28:56.338 ************************************
00:28:56.338 END TEST dpdk_mem_utility
00:28:56.338 ************************************
00:28:56.625 13:49:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:28:56.625 13:49:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:56.625 13:49:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:56.625 13:49:53 -- common/autotest_common.sh@10 -- # set +x
00:28:56.625 ************************************
00:28:56.625 START TEST event
00:28:56.625 ************************************
00:28:56.625 13:49:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:28:56.625 * Looking for test storage...
00:28:56.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
[lcov version-detection and LCOV_OPTS/LCOV export xtrace elided: scripts/common.sh checks that the installed lcov is at least 1.15 (component-wise comparison sketched below) and exports branch/function coverage flags]
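The version gate elided above splits both version strings on '.', '-' and ':' and compares them component by component. A simplified standalone sketch of that comparison (numeric components only; not the exact scripts/common.sh code):

    #!/usr/bin/env bash
    # Sketch of the component-wise version comparison used by the lcov gate.
    lt() { # lt A B -> succeeds when version A < version B
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            # Missing components count as 0, so "2" compares equal to "2.0".
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"   # the lcov check above takes this branch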
00:28:56.625 13:49:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:28:56.625 13:49:53 event -- bdev/nbd_common.sh@6 -- # set -e
00:28:56.625 13:49:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:28:56.625 13:49:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:28:56.625 13:49:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:56.625 13:49:53 event -- common/autotest_common.sh@10 -- # set +x
00:28:56.625 ************************************
00:28:56.625 START TEST event_perf
00:28:56.625 ************************************
00:28:56.625 13:49:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:28:56.883 Running I/O for 1 seconds...[2024-11-20 13:49:53.927284] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:28:56.883 [2024-11-20 13:49:53.927441] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ]
00:28:56.883 [2024-11-20 13:49:54.128139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:57.141 [2024-11-20 13:49:54.310052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:57.141 [2024-11-20 13:49:54.310266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:57.141 [2024-11-20 13:49:54.310431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.141 Running I/O for 1 seconds...[2024-11-20 13:49:54.310455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:58.516
00:28:58.516 lcore 0: 176956
00:28:58.516 lcore 1: 176955
00:28:58.516 lcore 2: 176954
00:28:58.516 lcore 3: 176956
00:28:58.516 done.
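event_perf drives the SPDK event framework flat out for a fixed time and prints how many events each reactor processed. A sketch of the same invocation outside the harness, with the binary path and flags taken from the trace above (hugepage setup assumed done beforehand):

    #!/usr/bin/env bash
    # Sketch: run the event-framework perf binary by hand.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # -m 0xF starts one reactor per set bit (cores 0-3); -t 1 runs for
    # one second.
    sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
    # Roughly equal per-lcore counts at exit (~177k each above) indicate
    # events were spread evenly across the four reactors.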
00:28:58.516
00:28:58.516 real 0m1.693s
00:28:58.516 user 0m4.429s
00:28:58.516 sys 0m0.142s
00:28:58.516 13:49:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:58.516 ************************************
00:28:58.516 END TEST event_perf
00:28:58.516 13:49:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:28:58.516 ************************************
00:28:58.516 13:49:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:28:58.516 13:49:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:58.516 13:49:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:58.516 13:49:55 event -- common/autotest_common.sh@10 -- # set +x
00:28:58.516 ************************************
00:28:58.516 START TEST event_reactor
00:28:58.516 ************************************
00:28:58.516 13:49:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:28:58.774 [2024-11-20 13:49:55.664318] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:28:58.774 [2024-11-20 13:49:55.664460] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ]
00:28:58.774 [2024-11-20 13:49:55.838176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:58.774 [2024-11-20 13:49:55.962927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:00.147 test_start
00:29:00.147 oneshot
00:29:00.147 tick 100
00:29:00.147 tick 100
00:29:00.147 tick 250
00:29:00.147 tick 100
00:29:00.147 tick 100
00:29:00.147 tick 100
00:29:00.147 tick 250
00:29:00.147 tick 500
00:29:00.147 tick 100
00:29:00.147 tick 100
00:29:00.147 tick 250
00:29:00.147 tick 100
00:29:00.147 tick 100
00:29:00.147 test_end
00:29:00.147
00:29:00.147 real 0m1.606s
00:29:00.147 user 0m1.402s
00:29:00.147 sys 0m0.095s
00:29:00.147 13:49:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:00.147 13:49:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:29:00.147 ************************************
00:29:00.147 END TEST event_reactor
00:29:00.147 ************************************
00:29:00.147 13:49:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:29:00.147 13:49:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:00.147 13:49:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:00.147 13:49:57 event -- common/autotest_common.sh@10 -- # set +x
00:29:00.147 ************************************
00:29:00.147 START TEST event_reactor_perf
00:29:00.147 ************************************
00:29:00.147 13:49:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:29:00.147 [2024-11-20 13:49:57.340150] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:29:00.147 [2024-11-20 13:49:57.340310] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ]
00:29:00.405 [2024-11-20 13:49:57.533027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.405 [2024-11-20 13:49:57.657516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:01.849 test_start
00:29:01.849 test_end
00:29:01.849 Performance: 339182 events per second
00:29:01.849
00:29:01.849 real 0m1.623s
00:29:01.849 user 0m1.385s
00:29:01.849 sys 0m0.128s
00:29:01.849 13:49:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:01.849 13:49:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:29:01.849 ************************************
00:29:01.849 END TEST event_reactor_perf
00:29:01.849 ************************************
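Both reactor tests above run a single reactor (core mask 0x1): reactor registers pollers on 100-, 250- and 500-tick periods plus a one-shot event, while reactor_perf just counts how many events the reactor can pump per second. A sketch of running them by hand, back to back as the suite does (binary paths as built in this workspace):

    #!/usr/bin/env bash
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # "tick N" lines are pollers firing on their configured periods;
    # "oneshot" is a single queued event.
    sudo "$SPDK_DIR/test/event/reactor/reactor" -t 1

    # Same single-core setup, but measures raw event throughput
    # ("Performance: ... events per second" above).
    sudo "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1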
00:29:01.849 13:49:58 event -- event/event.sh@49 -- # uname -s
00:29:01.849 13:49:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:29:01.849 13:49:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:29:01.849 13:49:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:01.849 13:49:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:01.849 13:49:58 event -- common/autotest_common.sh@10 -- # set +x
00:29:01.849 ************************************
00:29:01.849 START TEST event_scheduler
00:29:01.849 ************************************
00:29:01.849 13:49:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:29:01.849 * Looking for test storage...
00:29:01.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
[lcov version-detection and LCOV_OPTS/LCOV export xtrace elided; identical to the block before START TEST event above]
00:29:01.850 13:49:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:29:01.850 13:49:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59424
00:29:01.850 13:49:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:29:01.850 13:49:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59424
00:29:01.850 13:49:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59424 ']'
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:01.850 13:49:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:29:02.108 [2024-11-20 13:49:59.285084] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:29:02.108 [2024-11-20 13:49:59.285277] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59424 ]
00:29:02.365 [2024-11-20 13:49:59.489669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:02.365 [2024-11-20 13:49:59.670808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:02.365 [2024-11-20 13:49:59.670981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.365 [2024-11-20 13:49:59.671184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:02.365 [2024-11-20 13:49:59.671211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:29:02.933 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:29:02.933 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:29:02.933 POWER: Cannot set governor of lcore 0 to userspace
00:29:02.933 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:29:02.933 POWER: Cannot set governor of lcore 0 to performance
00:29:02.933 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:29:02.933 POWER: Cannot set governor of lcore 0 to userspace
00:29:02.933 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:29:02.933 POWER: Cannot set governor of lcore 0 to userspace
00:29:02.933 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:29:02.933 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:29:02.933 POWER: Unable to set Power Management Environment for lcore 0
00:29:02.933 [2024-11-20 13:50:00.137219] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:29:02.933 [2024-11-20 13:50:00.137250] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:29:02.933 [2024-11-20 13:50:00.137263] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:29:02.933 [2024-11-20 13:50:00.137287] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:29:02.933 [2024-11-20 13:50:00.137300] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:29:02.933 [2024-11-20 13:50:00.137316] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
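The scheduler app was launched with --wait-for-rpc, so the scheduler must be selected before subsystem initialization completes; a sketch of the RPC sequence scheduler.sh drives (socket and defaults as logged above):

    #!/usr/bin/env bash
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    RPC="$SPDK_DIR/scripts/rpc.py"

    "$RPC" framework_set_scheduler dynamic   # select the dynamic scheduler
    "$RPC" framework_start_init              # then finish app initialization

    # In this VM the cpufreq sysfs files are absent, so the DPDK power
    # governor fails (POWER/GUEST_CHANNEL errors above) and the scheduler
    # keeps its defaults: load limit 20, core limit 80, core busy 95.
    "$RPC" framework_get_scheduler           # inspect the active scheduler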
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.933 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.933 13:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:29:03.499 [2024-11-20 13:50:00.516988] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:29:03.499 13:50:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.499 13:50:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:29:03.499 13:50:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:03.499 13:50:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:03.499 13:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:29:03.500 ************************************
00:29:03.500 START TEST scheduler_create_thread
00:29:03.500 ************************************
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
[repetitive rpc_cmd xtrace elided: scheduler.sh@13-@19 create active_pinned threads 2-5 on masks 0x2/0x4/0x8 (-a 100) and idle_pinned threads 6-9 on masks 0x1-0x8 (-a 0), and scheduler.sh@21 creates one_third_active (-a 30); each call is bracketed by xtrace_disable/set +x and a [[ 0 == 0 ]] status check]
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:29:03.500 13:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:29:04.873 13:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:29:04.873 13:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:29:06.246
00:29:06.246 real 0m2.619s
00:29:06.246 user 0m0.023s
00:29:06.246 sys 0m0.009s
00:29:06.246 13:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:06.246 13:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:29:06.246 ************************************
00:29:06.246 END TEST scheduler_create_thread
00:29:06.246 ************************************
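The thread-lifecycle RPCs in this test come from the suite's own RPC plugin (test/event/scheduler/scheduler_plugin.py), not core SPDK. A sketch of driving them directly; the PYTHONPATH handling and the bare thread id on stdout are assumptions based on how the trace uses them:

    #!/usr/bin/env bash
    # Sketch: create, throttle, and delete a scheduler test thread.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    export PYTHONPATH=$SPDK_DIR/test/event/scheduler${PYTHONPATH:+:$PYTHONPATH}
    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    # -n name, -m core mask, -a active percentage (how busy the thread
    # pretends to be). The create call prints the new thread id.
    id=$(rpc scheduler_thread_create -n demo_thread -m 0x1 -a 100)
    rpc scheduler_thread_set_active "$id" 50   # drop it to 50% active
    rpc scheduler_thread_delete "$id"          # retire it, like thread 12 above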
00:29:06.246 13:50:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:29:06.246 13:50:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59424
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59424 ']'
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59424
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59424
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
killing process with pid 59424
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59424'
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59424
00:29:06.246 13:50:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59424
00:29:06.504 [2024-11-20 13:50:03.632028] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:29:07.879
00:29:07.879 real 0m6.004s
00:29:07.879 user 0m9.920s
00:29:07.879 sys 0m0.553s
00:29:07.879 13:50:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:07.879 13:50:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:29:07.879 ************************************
00:29:07.879 END TEST event_scheduler
00:29:07.879 ************************************
00:29:07.879 13:50:05 event -- event/event.sh@51 -- # modprobe -n nbd
00:29:07.879 13:50:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:29:07.879 13:50:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:07.879 13:50:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:07.879 13:50:05 event -- common/autotest_common.sh@10 -- # set +x
00:29:07.879 ************************************
00:29:07.879 START TEST app_repeat
00:29:07.879 ************************************
00:29:07.879 13:50:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59536
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
Process app_repeat pid: 59536
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59536'
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
spdk_app_start Round 0
00:29:07.879 13:50:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
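Each app_repeat round boils down to a handful of RPCs against the app's socket: create two 64 MiB malloc bdevs with 4096-byte blocks, export them as kernel NBD devices, verify data, tear down. A by-hand sketch of one round (socket path and sizes taken from the trace below; nbd_stop_disk assumed as the symmetric teardown):

    #!/usr/bin/env bash
    # Sketch: one app_repeat round against the app's RPC socket.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

    sudo modprobe nbd                       # kernel NBD driver must be loaded
    rpc bdev_malloc_create 64 4096          # 64 MiB bdev, 4096-byte blocks -> "Malloc0"
    rpc bdev_malloc_create 64 4096          # -> "Malloc1"
    rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each bdev as a block device
    rpc nbd_start_disk Malloc1 /dev/nbd1
    # ... data-verification I/O against /dev/nbd0 and /dev/nbd1 ...
    rpc nbd_stop_disk /dev/nbd0             # teardown before the next round
    rpc nbd_stop_disk /dev/nbd1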
00:29:07.879 [2024-11-20 13:50:05.138811] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ]
00:29:08.137 [2024-11-20 13:50:05.338259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:08.395 [2024-11-20 13:50:05.519249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:08.395 [2024-11-20 13:50:05.519269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.977 13:50:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:08.977 13:50:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:29:08.977 13:50:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:09.236 Malloc0
00:29:09.236 13:50:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:09.804 Malloc1
00:29:09.804 13:50:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:09.804 13:50:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:29:10.062 /dev/nbd0
00:29:10.062 13:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:10.062 13:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:10.062 1+0 records in
00:29:10.062 1+0 records out
00:29:10.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465923 s, 8.8 MB/s
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:10.062 13:50:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:10.062 13:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:10.062 13:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:10.062 13:50:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:29:10.320 /dev/nbd1
00:29:10.320 13:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:10.320 13:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:10.320 1+0 records in
00:29:10.320 1+0 records out
00:29:10.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038119 s, 10.7 MB/s
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:10.320 13:50:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:10.320 13:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:10.320 13:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:10.321 13:50:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:10.321 13:50:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:10.321 13:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:10.578 {
00:29:10.578 "nbd_device": "/dev/nbd0",
00:29:10.578 "bdev_name": "Malloc0"
00:29:10.578 },
00:29:10.578 {
00:29:10.578 "nbd_device": "/dev/nbd1",
00:29:10.578 "bdev_name": "Malloc1"
00:29:10.578 }
00:29:10.578 ]'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:29:10.578 {
00:29:10.578 "nbd_device": "/dev/nbd0",
00:29:10.578 "bdev_name": "Malloc0"
00:29:10.578 },
00:29:10.578 {
00:29:10.578 "nbd_device": "/dev/nbd1",
00:29:10.578 "bdev_name": "Malloc1"
00:29:10.578 }
00:29:10.578 ]'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:10.578 /dev/nbd1'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:10.578 /dev/nbd1'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:29:10.578 256+0 records in
00:29:10.578 256+0 records out
00:29:10.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128644 s, 81.5 MB/s
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:10.578 13:50:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:10.836 256+0 records in
00:29:10.836 256+0 records out
00:29:10.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309941 s, 33.8 MB/s
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:10.836 256+0 records in
00:29:10.836 256+0 records out
00:29:10.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247203 s, 42.4 MB/s
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:29:10.836 13:50:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:10.837 13:50:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:11.095 13:50:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:11.354 13:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:11.613 13:50:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:29:11.613 13:50:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:29:12.180 13:50:09 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:29:13.554 [2024-11-20 13:50:10.491827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:13.554 [2024-11-20 13:50:10.608177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:13.555 [2024-11-20 13:50:10.608183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:13.555 [2024-11-20 13:50:10.816446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:29:13.555 [2024-11-20 13:50:10.816552] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:29:15.454 13:50:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:29:15.454 spdk_app_start Round 1
00:29:15.454 13:50:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:29:15.454 13:50:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59536 /var/tmp/spdk-nbd.sock
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59536 ']'
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:15.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:50:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
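The nbd_get_count sequence traced in this round's teardown reduces to a small jq pipeline. A standalone rendering, assuming the same rpc.py path and socket as this run, and keeping the `|| true` that shows up as the `-- # true` trace when no devices remain:

```bash
#!/usr/bin/env bash
# Count exported nbd devices the way nbd_common.sh does: list them over
# RPC, extract the device nodes with jq, count matches with grep.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero when the count is 0, so keep the pipeline alive.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```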
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:15.454 13:50:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:29:15.454 13:50:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:15.711 Malloc0
00:29:15.711 13:50:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:15.969 Malloc1
00:29:15.969 13:50:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:15.969 13:50:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:29:16.228 /dev/nbd0
00:29:16.228 13:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:16.228 13:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:16.228 1+0 records in
00:29:16.228 1+0 records out
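The waitfornbd trace above (autotest_common.sh@872-893; its dd output continues just below) compresses to: wait for the kernel to publish the device, then prove it serves reads. A sketch using the same paths, with the real helper's retry of the dd itself omitted:

```bash
#!/usr/bin/env bash
# waitfornbd sketch: poll /proc/partitions for the nbd device, then do a
# single 4 KiB O_DIRECT read and require a non-empty copy.
waitfornbd_sketch() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]   # mirrors the '[' 4096 '!=' 0 ']' check in the trace
}

# waitfornbd_sketch nbd0
```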
00:29:16.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264906 s, 15.5 MB/s
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:16.228 13:50:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:16.228 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:16.228 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:16.228 13:50:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:29:16.808 /dev/nbd1
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:16.808 1+0 records in
00:29:16.808 1+0 records out
00:29:16.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446156 s, 9.2 MB/s
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:16.808 13:50:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:16.808 13:50:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:16.808 13:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:16.808 {
00:29:16.808 "nbd_device": "/dev/nbd0",
00:29:16.808 "bdev_name": "Malloc0"
00:29:16.808 },
00:29:16.808 {
00:29:16.808 "nbd_device": "/dev/nbd1",
00:29:16.808 "bdev_name": "Malloc1"
00:29:16.808 }
00:29:16.808 ]'
00:29:16.808 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:16.808 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:29:16.808 {
00:29:16.808 "nbd_device": "/dev/nbd0",
00:29:16.808 "bdev_name": "Malloc0"
00:29:16.808 },
00:29:16.808 {
00:29:16.808 "nbd_device": "/dev/nbd1",
00:29:16.808 "bdev_name": "Malloc1"
00:29:16.808 }
00:29:16.808 ]'
00:29:16.808 13:50:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:17.065 /dev/nbd1'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:17.065 /dev/nbd1'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:29:17.065 256+0 records in
00:29:17.065 256+0 records out
00:29:17.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00896307 s, 117 MB/s
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:17.065 256+0 records in
00:29:17.065 256+0 records out
00:29:17.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032006 s, 32.8 MB/s
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:17.065 256+0 records in
00:29:17.065 256+0 records out
00:29:17.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376484 s, 27.9 MB/s
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:29:17.065 13:50:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:17.066 13:50:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:17.324 13:50:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:17.581 13:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:18.147 13:50:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:29:18.147 13:50:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:29:18.405 13:50:15 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:29:19.778 [2024-11-20 13:50:16.934501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:19.778 [2024-11-20 13:50:17.055312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:19.778 [2024-11-20 13:50:17.055330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:20.039 [2024-11-20 13:50:17.262566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:29:20.039 [2024-11-20 13:50:17.262662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:29:21.436 spdk_app_start Round 2
00:29:21.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:29:21.436 13:50:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:29:21.436 13:50:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:29:21.436 13:50:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59536 /var/tmp/spdk-nbd.sock
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59536 ']'
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
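For orientation between rounds: reconstructed from the event.sh line numbers traced in this log (not copied from the script itself), the loop driving these Round 0/1/2 passes looks roughly like the sketch below. The helpers come from the files the traces name, and the paths assume this run's repo layout.

```bash
#!/usr/bin/env bash
# Approximate shape of app_repeat_test's main loop (event.sh@23-35 traces).
# Each round waits for the app's RPC server, registers two malloc bdevs,
# runs the nbd write/verify pass, then asks the app to restart itself.
source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server=/var/tmp/spdk-nbd.sock
repeat_pid=59536   # pid of the already-launched app_repeat binary (this run's value)

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc0
    "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc1
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    "$rpc" -s "$rpc_server" spdk_kill_instance SIGTERM   # app restarts for the next round
    sleep 3
done
```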
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:21.436 13:50:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:29:21.695 13:50:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:21.695 13:50:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:29:21.695 13:50:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:22.261 Malloc0
00:29:22.261 13:50:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:29:22.519 Malloc1
00:29:22.519 13:50:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:22.519 13:50:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:29:22.778 /dev/nbd0
00:29:22.778 13:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:22.778 13:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:22.778 13:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:23.037 1+0 records in
00:29:23.037 1+0 records out
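The write/verify phases traced around here (nbd_common.sh@70-85) are small enough to restate as a runnable fragment; paths and sizes are the ones this run uses:

```bash
#!/usr/bin/env bash
# nbd_dd_data_verify sketch: copy 1 MiB of random data onto each nbd
# device with O_DIRECT, then compare every device back against the file.
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # write phase
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done

for i in "${nbd_list[@]}"; do                                # verify phase
    cmp -b -n 1M "$tmp_file" "$i"   # non-zero exit on any mismatch fails the test
done
rm "$tmp_file"
```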
00:29:23.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267368 s, 15.3 MB/s
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:23.037 13:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:23.037 13:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:23.037 13:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:23.037 13:50:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:29:23.037 /dev/nbd1
00:29:23.037 13:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:23.296 13:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:29:23.296 1+0 records in
00:29:23.296 1+0 records out
00:29:23.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325417 s, 12.6 MB/s
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:23.296 13:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:29:23.297 13:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:23.297 13:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:23.297 13:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:23.297 13:50:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:23.297 13:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:23.557 {
00:29:23.557 "nbd_device": "/dev/nbd0",
00:29:23.557 "bdev_name": "Malloc0"
00:29:23.557 },
00:29:23.557 {
00:29:23.557 "nbd_device": "/dev/nbd1",
00:29:23.557 "bdev_name": "Malloc1"
00:29:23.557 }
00:29:23.557 ]'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:29:23.557 {
00:29:23.557 "nbd_device": "/dev/nbd0",
00:29:23.557 "bdev_name": "Malloc0"
00:29:23.557 },
00:29:23.557 {
00:29:23.557 "nbd_device": "/dev/nbd1",
00:29:23.557 "bdev_name": "Malloc1"
00:29:23.557 }
00:29:23.557 ]'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:23.557 /dev/nbd1'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:23.557 /dev/nbd1'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:29:23.557 256+0 records in
00:29:23.557 256+0 records out
00:29:23.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127723 s, 82.1 MB/s
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:23.557 256+0 records in
00:29:23.557 256+0 records out
00:29:23.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259386 s, 40.4 MB/s
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:23.557 13:50:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:23.816 256+0 records in
00:29:23.816 256+0 records out
00:29:23.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307631 s, 34.1 MB/s
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:23.816 13:50:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:24.074 13:50:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:24.333 13:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:29:24.334 13:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:29:24.334 13:50:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:24.334 13:50:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:24.334 13:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:24.596 13:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:24.596 13:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:24.597 13:50:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:29:24.597 13:50:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:29:25.180 13:50:22 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:29:26.573 [2024-11-20 13:50:23.609446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:26.573 [2024-11-20 13:50:23.732555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:26.573 [2024-11-20 13:50:23.732556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:26.832 [2024-11-20 13:50:23.936619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:29:26.832 [2024-11-20 13:50:23.936724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:29:28.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:29:28.209 13:50:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59536 /var/tmp/spdk-nbd.sock
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59536 ']'
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
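The killprocess sequence that follows (autotest_common.sh@954-978) boils down to a guarded SIGTERM plus reap. A sketch, with the sudo special case reduced to a comment since the real helper's handling of it is not shown in this log:

```bash
# killprocess sketch: verify the pid is set and alive, name the process
# for the log, send SIGTERM, and wait so the exit status is collected.
killprocess_sketch() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The real helper branches when process_name is "sudo"; plain case here.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # works because the pid is our child
}
```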
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:28.209 13:50:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:29:28.467 13:50:25 event.app_repeat -- event/event.sh@39 -- # killprocess 59536
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59536 ']'
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59536
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59536
00:29:28.467 killing process with pid 59536
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59536'
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59536
00:29:28.467 13:50:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59536
00:29:29.869 spdk_app_start is called in Round 0.
00:29:29.869 Shutdown signal received, stop current app iteration
00:29:29.869 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization...
00:29:29.869 spdk_app_start is called in Round 1.
00:29:29.869 Shutdown signal received, stop current app iteration
00:29:29.869 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization...
00:29:29.869 spdk_app_start is called in Round 2.
00:29:29.869 Shutdown signal received, stop current app iteration
00:29:29.869 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization...
00:29:29.869 spdk_app_start is called in Round 3.
00:29:29.869 Shutdown signal received, stop current app iteration
00:29:29.869 13:50:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:29:29.869 13:50:26 event.app_repeat -- event/event.sh@42 -- # return 0
00:29:29.869 ************************************
00:29:29.869 END TEST app_repeat
00:29:29.869
00:29:29.869 real 0m21.748s
00:29:29.869 user 0m47.260s
00:29:29.869 sys 0m3.708s
00:29:29.869 13:50:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:29.869 13:50:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:29:29.869 ************************************
00:29:29.869 13:50:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:29:29.869 13:50:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:29:29.869 13:50:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:29.869 13:50:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:29.869 13:50:26 event -- common/autotest_common.sh@10 -- # set +x
00:29:29.869 ************************************
00:29:29.869 START TEST cpu_locks
00:29:29.869 ************************************
00:29:29.869 13:50:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:29:29.869 * Looking for test storage...
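The START TEST / END TEST banners and the real/user/sys triplets that bracket every test in this log come from the run_test wrapper. Reconstructed from its traced line numbers (so details may differ from the real autotest_common.sh), it is essentially:

```bash
# run_test sketch: banner in, time the test body, banner out. The bash
# `time` builtin is what produces the real/user/sys lines in this log.
run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# usage, mirroring the trace above:
# run_test_sketch cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
```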
00:29:29.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:29:29.869 13:50:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.869 13:50:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.869 13:50:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.869 13:50:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.869 13:50:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:29:29.869 13:50:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.870 --rc genhtml_branch_coverage=1 00:29:29.870 --rc genhtml_function_coverage=1 00:29:29.870 --rc genhtml_legend=1 00:29:29.870 --rc geninfo_all_blocks=1 00:29:29.870 --rc geninfo_unexecuted_blocks=1 00:29:29.870 00:29:29.870 ' 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.870 --rc genhtml_branch_coverage=1 00:29:29.870 --rc genhtml_function_coverage=1 
00:29:29.870 --rc genhtml_legend=1 00:29:29.870 --rc geninfo_all_blocks=1 00:29:29.870 --rc geninfo_unexecuted_blocks=1 00:29:29.870 00:29:29.870 ' 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.870 --rc genhtml_branch_coverage=1 00:29:29.870 --rc genhtml_function_coverage=1 00:29:29.870 --rc genhtml_legend=1 00:29:29.870 --rc geninfo_all_blocks=1 00:29:29.870 --rc geninfo_unexecuted_blocks=1 00:29:29.870 00:29:29.870 ' 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.870 --rc genhtml_branch_coverage=1 00:29:29.870 --rc genhtml_function_coverage=1 00:29:29.870 --rc genhtml_legend=1 00:29:29.870 --rc geninfo_all_blocks=1 00:29:29.870 --rc geninfo_unexecuted_blocks=1 00:29:29.870 00:29:29.870 ' 00:29:29.870 13:50:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:29:29.870 13:50:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:29:29.870 13:50:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:29:29.870 13:50:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.870 13:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:29.870 ************************************ 00:29:29.870 START TEST default_locks 00:29:29.870 ************************************ 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60012 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60012 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60012 ']' 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.870 13:50:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:30.129 [2024-11-20 13:50:27.251718] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
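The cmp_versions walk traced above decides whether the installed lcov predates 2.x, which selects the legacy --rc coverage flags that get exported into LCOV_OPTS. A minimal bash rendition of that dotted-version compare, mirroring but not reproducing scripts/common.sh:

cmp_versions() {
    # split e.g. "1.15" and "2" on any of . - : into arrays, then compare
    # element-wise, treating missing components as 0 (so 2 == 2.0 == 2.0.0)
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x: use the legacy --rc flags"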
00:29:30.129 [2024-11-20 13:50:27.251922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60012 ] 00:29:30.387 [2024-11-20 13:50:27.453227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.387 [2024-11-20 13:50:27.577446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.322 13:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.322 13:50:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:29:31.322 13:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60012 00:29:31.322 13:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60012 00:29:31.322 13:50:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60012 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60012 ']' 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60012 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60012 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.889 killing process with pid 60012 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60012' 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60012 00:29:31.889 13:50:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60012 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60012 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60012 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60012 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60012 ']' 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.446 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:34.446 ERROR: process (pid: 60012) is no longer running 00:29:34.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60012) - No such process 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:29:34.446 00:29:34.446 real 0m4.581s 00:29:34.446 user 0m4.606s 00:29:34.446 sys 0m0.827s 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.446 ************************************ 00:29:34.446 END TEST default_locks 00:29:34.446 ************************************ 00:29:34.446 13:50:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:34.446 13:50:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:29:34.446 13:50:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:34.446 13:50:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.446 13:50:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:34.446 ************************************ 00:29:34.446 START TEST default_locks_via_rpc 00:29:34.446 ************************************ 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60097 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60097 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60097 ']' 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
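The default_locks sequence above boils down to three checks: while spdk_tgt (pid 60012) runs, lslocks must show an spdk_cpu_lock entry for it; after killprocess, waitforlisten on the dead pid must fail (the NOT wrapper expects es=1); and no /var/tmp/spdk_cpu_lock_* files may remain. A minimal sketch of the two lock probes, assuming the same lock-file naming as in the trace:

locks_exist() {
    # util-linux lslocks lists the locks a pid holds; the per-core lock
    # files created by spdk_tgt are named spdk_cpu_lock_<core>
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

no_locks() {
    # without nullglob an unmatched glob stays literal, so test existence
    local -a lock_files=(/var/tmp/spdk_cpu_lock_*)
    [[ ! -e ${lock_files[0]} ]]
}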
00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.446 13:50:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:34.705 [2024-11-20 13:50:31.844402] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:34.705 [2024-11-20 13:50:31.844558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:29:34.705 [2024-11-20 13:50:32.022397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.964 [2024-11-20 13:50:32.150326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60097 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:35.901 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60097 ']' 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.469 killing process with pid 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60097' 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60097 00:29:36.469 13:50:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60097 00:29:39.009 00:29:39.009 real 0m4.468s 00:29:39.009 user 0m4.472s 00:29:39.009 sys 0m0.764s 00:29:39.009 13:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.009 13:50:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:39.009 ************************************ 00:29:39.009 END TEST default_locks_via_rpc 00:29:39.009 ************************************ 00:29:39.009 13:50:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:29:39.009 13:50:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:39.009 13:50:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.009 13:50:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:39.009 ************************************ 00:29:39.009 START TEST non_locking_app_on_locked_coremask 00:29:39.009 ************************************ 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60172 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60172 /var/tmp/spdk.sock 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60172 ']' 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.009 13:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:39.268 [2024-11-20 13:50:36.367726] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
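default_locks_via_rpc, which finishes above, toggles the same per-core locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks re-claims them, and lslocks confirms the lock is held again afterwards. A condensed sketch, assuming an SPDK checkout as the working directory and rpc.py's default /var/tmp/spdk.sock socket:

./build/bin/spdk_tgt -m 0x1 &
pid=$!
# ...wait until the target listens on /var/tmp/spdk.sock...
scripts/rpc.py framework_disable_cpumask_locks   # releases /var/tmp/spdk_cpu_lock_000
scripts/rpc.py framework_enable_cpumask_locks    # re-acquires it
lslocks -p "$pid" | grep spdk_cpu_lock           # lock is held again
kill "$pid"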
00:29:39.268 [2024-11-20 13:50:36.367877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60172 ] 00:29:39.268 [2024-11-20 13:50:36.536991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.528 [2024-11-20 13:50:36.684395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60194 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60194 /var/tmp/spdk2.sock 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60194 ']' 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.464 13:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:40.464 [2024-11-20 13:50:37.764810] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:40.464 [2024-11-20 13:50:37.765027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 00:29:40.723 [2024-11-20 13:50:37.977638] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:29:40.723 [2024-11-20 13:50:37.977724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.019 [2024-11-20 13:50:38.241328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.552 13:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.552 13:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:29:43.552 13:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60172 00:29:43.552 13:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60172 00:29:43.552 13:50:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60172 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60172 ']' 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60172 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60172 00:29:44.487 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.488 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.488 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60172' 00:29:44.488 killing process with pid 60172 00:29:44.488 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60172 00:29:44.488 13:50:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60172 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60194 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60194 ']' 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60194 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.887 13:50:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194 00:29:49.887 13:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.887 13:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.887 killing process with pid 60194 00:29:49.887 13:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194' 00:29:49.887 13:50:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60194 00:29:49.887 13:50:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60194 00:29:53.168 00:29:53.168 real 0m13.510s 00:29:53.168 user 0m14.238s 00:29:53.168 sys 0m1.692s 00:29:53.168 13:50:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.168 13:50:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:53.168 ************************************ 00:29:53.168 END TEST non_locking_app_on_locked_coremask 00:29:53.168 ************************************ 00:29:53.168 13:50:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:29:53.168 13:50:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:53.168 13:50:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.168 13:50:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:53.168 ************************************ 00:29:53.168 START TEST locking_app_on_unlocked_coremask 00:29:53.168 ************************************ 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60358 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60358 /var/tmp/spdk.sock 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60358 ']' 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.168 13:50:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:53.168 [2024-11-20 13:50:49.938058] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:53.168 [2024-11-20 13:50:49.938195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60358 ] 00:29:53.168 [2024-11-20 13:50:50.116427] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
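The non_locking_app_on_locked_coremask run that ends above exercises the opt-out path: the first target claims core 0, and a second target on the same core can still come up because it was started with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice) and a separate RPC socket. Condensed to its two launches, with the flags and socket path taken from the trace:

./build/bin/spdk_tgt -m 0x1 &                      # claims /var/tmp/spdk_cpu_lock_000
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                       # shares core 0, attempts no claim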
00:29:53.168 [2024-11-20 13:50:50.116494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.168 [2024-11-20 13:50:50.252773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60380 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60380 /var/tmp/spdk2.sock 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60380 ']' 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:54.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.103 13:50:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:54.103 [2024-11-20 13:50:51.350927] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:54.103 [2024-11-20 13:50:51.351063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:29:54.362 [2024-11-20 13:50:51.546349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.620 [2024-11-20 13:50:51.813180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.152 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.152 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:29:57.152 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60380 00:29:57.152 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60380 00:29:57.152 13:50:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60358 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60358 ']' 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60358 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60358 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.088 killing process with pid 60358 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60358' 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60358 00:29:58.088 13:50:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60358 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60380 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60380 ']' 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60380 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60380 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.354 killing process with pid 60380 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60380' 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60380 00:30:03.354 13:51:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60380 00:30:06.637 00:30:06.637 real 0m13.506s 00:30:06.637 user 0m14.116s 00:30:06.637 sys 0m1.635s 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:06.637 ************************************ 00:30:06.637 END TEST locking_app_on_unlocked_coremask 00:30:06.637 ************************************ 00:30:06.637 13:51:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:30:06.637 13:51:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:06.637 13:51:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.637 13:51:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:06.637 ************************************ 00:30:06.637 START TEST locking_app_on_locked_coremask 00:30:06.637 ************************************ 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60549 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60549 /var/tmp/spdk.sock 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60549 ']' 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.637 13:51:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:06.637 [2024-11-20 13:51:03.540675] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
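locking_app_on_unlocked_coremask, concluded above, inverts that arrangement: the first target opts out of locking, so the core stays unclaimed and the second, lock-enabled target acquires it. In sketch form, under the same path assumptions:

./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock file
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 itself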
00:30:06.637 [2024-11-20 13:51:03.540889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60549 ] 00:30:06.637 [2024-11-20 13:51:03.735591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.637 [2024-11-20 13:51:03.871356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60566 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60566 /var/tmp/spdk2.sock 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60566 /var/tmp/spdk2.sock 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60566 /var/tmp/spdk2.sock 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60566 ']' 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.576 13:51:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:07.834 [2024-11-20 13:51:04.958733] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:07.834 [2024-11-20 13:51:04.958864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:30:07.834 [2024-11-20 13:51:05.154071] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60549 has claimed it. 00:30:07.834 [2024-11-20 13:51:05.154140] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:30:08.402 ERROR: process (pid: 60566) is no longer running 00:30:08.402 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60566) - No such process 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60549 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60549 00:30:08.402 13:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60549 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60549 ']' 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60549 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.969 killing process with pid 60549 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549' 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60549 00:30:08.969 13:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60549 00:30:11.502 00:30:11.502 real 0m5.411s 00:30:11.502 user 0m5.690s 00:30:11.502 sys 0m0.906s 00:30:11.502 13:51:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.502 13:51:08 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:30:11.502 ************************************ 00:30:11.502 END TEST locking_app_on_locked_coremask 00:30:11.502 ************************************ 00:30:11.801 13:51:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:30:11.801 13:51:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.801 13:51:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.801 13:51:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:11.801 ************************************ 00:30:11.801 START TEST locking_overlapped_coremask 00:30:11.801 ************************************ 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60641 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60641 /var/tmp/spdk.sock 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60641 ']' 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:30:11.801 13:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:11.801 [2024-11-20 13:51:09.008990] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
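locking_app_on_locked_coremask, which ends above, is the enforcement case: with locks enabled on both sides, the second target must abort, and the trace shows exactly that ("Cannot create lock on core 0, probably process 60549 has claimed it", then "Unable to acquire lock on assigned core mask - exiting."). A sketch of the expected failure, again assuming an SPDK checkout:

./build/bin/spdk_tgt -m 0x1 &                              # first target claims core 0
if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second target aborted: core 0 already claimed"   # expected outcome
fi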
00:30:11.801 [2024-11-20 13:51:09.009163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:30:12.104 [2024-11-20 13:51:09.202142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.104 [2024-11-20 13:51:09.337773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.104 [2024-11-20 13:51:09.337885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.104 [2024-11-20 13:51:09.337904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60659 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60659 /var/tmp/spdk2.sock 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60659 /var/tmp/spdk2.sock 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60659 /var/tmp/spdk2.sock 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60659 ']' 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.041 13:51:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:13.300 [2024-11-20 13:51:10.433451] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:13.300 [2024-11-20 13:51:10.433624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60659 ] 00:30:13.559 [2024-11-20 13:51:10.635838] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60641 has claimed it. 00:30:13.559 [2024-11-20 13:51:10.635923] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:30:13.819 ERROR: process (pid: 60659) is no longer running 00:30:13.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60659) - No such process 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60641 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60641 ']' 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60641 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60641 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.819 killing process with pid 60641 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60641' 00:30:13.819 13:51:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60641 00:30:13.819 13:51:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60641 00:30:17.117 00:30:17.117 real 0m4.990s 00:30:17.117 user 0m13.508s 00:30:17.117 sys 0m0.686s 00:30:17.117 13:51:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.117 13:51:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:17.117 ************************************ 00:30:17.117 END TEST locking_overlapped_coremask 00:30:17.117 ************************************ 00:30:17.117 13:51:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:30:17.118 13:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:17.118 13:51:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.118 13:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:17.118 ************************************ 00:30:17.118 START TEST locking_overlapped_coremask_via_rpc 00:30:17.118 ************************************ 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60729 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60729 /var/tmp/spdk.sock 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.118 13:51:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:17.118 [2024-11-20 13:51:14.032930] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:17.118 [2024-11-20 13:51:14.033066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:30:17.118 [2024-11-20 13:51:14.206609] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
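The two overlapped-coremask tests hinge on one piece of mask arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the masks collide on core 2, which is the core named in every claim failure above. Verifiable in one line:

printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2, i.e. core 2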
00:30:17.118 [2024-11-20 13:51:14.206679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.118 [2024-11-20 13:51:14.345076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.118 [2024-11-20 13:51:14.345208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.118 [2024-11-20 13:51:14.345237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60752 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60752 /var/tmp/spdk2.sock 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60752 ']' 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.069 13:51:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:18.328 [2024-11-20 13:51:15.488766] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:18.328 [2024-11-20 13:51:15.489620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:30:18.587 [2024-11-20 13:51:15.706242] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:30:18.587 [2024-11-20 13:51:15.706321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:18.846 [2024-11-20 13:51:15.984305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.846 [2024-11-20 13:51:15.987608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.846 [2024-11-20 13:51:15.987635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.381 [2024-11-20 13:51:18.379715] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60729 has claimed it. 
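The ERROR above is the behavior under test: by this point the first instance (pid 60729) has enabled cpumask locks on cores 0-2, and the second instance's mask overlaps it on exactly one core. The overlap is easy to verify by hand:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, so core 2

hence "Cannot create lock on core 2".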
00:30:21.381 request: 00:30:21.381 { 00:30:21.381 "method": "framework_enable_cpumask_locks", 00:30:21.381 "req_id": 1 00:30:21.381 } 00:30:21.381 Got JSON-RPC error response 00:30:21.381 response: 00:30:21.381 { 00:30:21.381 "code": -32603, 00:30:21.381 "message": "Failed to claim CPU core: 2" 00:30:21.381 } 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60729 /var/tmp/spdk.sock 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60752 /var/tmp/spdk2.sock 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60752 ']' 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:21.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
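With the failure confirmed, the test re-attaches to both sockets and then (just below) checks that exactly the first instance's lock files survived. The check is a compact bash idiom, a glob of what exists compared against a brace expansion of what is expected; restated with comments, assuming the /var/tmp paths seen in this log:

  locks=(/var/tmp/spdk_cpu_lock_*)                   # lock files actually present (glob, sorted)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # cores 0-2 held by pid 60729
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected locks remain"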
00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.381 13:51:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:30:21.948 ************************************ 00:30:21.948 END TEST locking_overlapped_coremask_via_rpc 00:30:21.948 ************************************ 00:30:21.948 00:30:21.948 real 0m5.098s 00:30:21.948 user 0m1.918s 00:30:21.948 sys 0m0.328s 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.948 13:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.948 13:51:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:30:21.948 13:51:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60729 ]] 00:30:21.948 13:51:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60729 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60729 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60729 00:30:21.948 killing process with pid 60729 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60729' 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60729 00:30:21.948 13:51:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60729 00:30:25.237 13:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60752 ]] 00:30:25.237 13:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60752 00:30:25.237 13:51:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60752 ']' 00:30:25.237 13:51:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60752 00:30:25.237 13:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:30:25.237 13:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.237 
13:51:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60752 00:30:25.237 killing process with pid 60752 00:30:25.237 13:51:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:25.237 13:51:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:25.237 13:51:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60752' 00:30:25.237 13:51:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60752 00:30:25.237 13:51:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60752 00:30:27.769 13:51:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60729 ]] 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60729 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60729 00:30:27.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60729) - No such process 00:30:27.770 Process with pid 60729 is not found 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60729 is not found' 00:30:27.770 Process with pid 60752 is not found 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60752 ]] 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60752 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60752 ']' 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60752 00:30:27.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60752) - No such process 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60752 is not found' 00:30:27.770 13:51:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:30:27.770 00:30:27.770 real 0m57.979s 00:30:27.770 user 1m40.757s 00:30:27.770 sys 0m8.130s 00:30:27.770 ************************************ 00:30:27.770 END TEST cpu_locks 00:30:27.770 ************************************ 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.770 13:51:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:27.770 ************************************ 00:30:27.770 END TEST event 00:30:27.770 ************************************ 00:30:27.770 00:30:27.770 real 1m31.227s 00:30:27.770 user 2m45.397s 00:30:27.770 sys 0m13.085s 00:30:27.770 13:51:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.770 13:51:24 event -- common/autotest_common.sh@10 -- # set +x 00:30:27.770 13:51:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:30:27.770 13:51:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:27.770 13:51:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.770 13:51:24 -- common/autotest_common.sh@10 -- # set +x 00:30:27.770 ************************************ 00:30:27.770 START TEST thread 00:30:27.770 ************************************ 00:30:27.770 13:51:24 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:30:27.770 * Looking for test storage... 
00:30:27.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:30:27.770 13:51:25 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:27.770 13:51:25 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.770 13:51:25 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.073 13:51:25 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.073 13:51:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.073 13:51:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.073 13:51:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.073 13:51:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.073 13:51:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.073 13:51:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.073 13:51:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.073 13:51:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.073 13:51:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.073 13:51:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.073 13:51:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.073 13:51:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:30:28.073 13:51:25 thread -- scripts/common.sh@345 -- # : 1 00:30:28.073 13:51:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.073 13:51:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:28.073 13:51:25 thread -- scripts/common.sh@365 -- # decimal 1 00:30:28.073 13:51:25 thread -- scripts/common.sh@353 -- # local d=1 00:30:28.073 13:51:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.073 13:51:25 thread -- scripts/common.sh@355 -- # echo 1 00:30:28.073 13:51:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.073 13:51:25 thread -- scripts/common.sh@366 -- # decimal 2 00:30:28.073 13:51:25 thread -- scripts/common.sh@353 -- # local d=2 00:30:28.073 13:51:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.074 13:51:25 thread -- scripts/common.sh@355 -- # echo 2 00:30:28.074 13:51:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.074 13:51:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.074 13:51:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.074 13:51:25 thread -- scripts/common.sh@368 -- # return 0 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.074 --rc genhtml_branch_coverage=1 00:30:28.074 --rc genhtml_function_coverage=1 00:30:28.074 --rc genhtml_legend=1 00:30:28.074 --rc geninfo_all_blocks=1 00:30:28.074 --rc geninfo_unexecuted_blocks=1 00:30:28.074 00:30:28.074 ' 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.074 --rc genhtml_branch_coverage=1 00:30:28.074 --rc genhtml_function_coverage=1 00:30:28.074 --rc genhtml_legend=1 00:30:28.074 --rc geninfo_all_blocks=1 00:30:28.074 --rc geninfo_unexecuted_blocks=1 00:30:28.074 00:30:28.074 ' 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:28.074 --rc genhtml_branch_coverage=1 00:30:28.074 --rc genhtml_function_coverage=1 00:30:28.074 --rc genhtml_legend=1 00:30:28.074 --rc geninfo_all_blocks=1 00:30:28.074 --rc geninfo_unexecuted_blocks=1 00:30:28.074 00:30:28.074 ' 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.074 --rc genhtml_branch_coverage=1 00:30:28.074 --rc genhtml_function_coverage=1 00:30:28.074 --rc genhtml_legend=1 00:30:28.074 --rc geninfo_all_blocks=1 00:30:28.074 --rc geninfo_unexecuted_blocks=1 00:30:28.074 00:30:28.074 ' 00:30:28.074 13:51:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.074 13:51:25 thread -- common/autotest_common.sh@10 -- # set +x 00:30:28.074 ************************************ 00:30:28.074 START TEST thread_poller_perf 00:30:28.074 ************************************ 00:30:28.074 13:51:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:30:28.074 [2024-11-20 13:51:25.216216] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:28.074 [2024-11-20 13:51:25.216623] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60964 ] 00:30:28.331 [2024-11-20 13:51:25.413238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.331 [2024-11-20 13:51:25.551364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.331 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:30:29.705 [2024-11-20T13:51:27.028Z] ====================================== 00:30:29.705 [2024-11-20T13:51:27.028Z] busy:2110534404 (cyc) 00:30:29.705 [2024-11-20T13:51:27.028Z] total_run_count: 330000 00:30:29.705 [2024-11-20T13:51:27.028Z] tsc_hz: 2100000000 (cyc) 00:30:29.705 [2024-11-20T13:51:27.028Z] ====================================== 00:30:29.705 [2024-11-20T13:51:27.028Z] poller_cost: 6395 (cyc), 3045 (nsec) 00:30:29.705 ************************************ 00:30:29.705 END TEST thread_poller_perf 00:30:29.705 ************************************ 00:30:29.705 00:30:29.705 real 0m1.679s 00:30:29.705 user 0m1.453s 00:30:29.705 sys 0m0.115s 00:30:29.705 13:51:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.705 13:51:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:30:29.705 13:51:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:30:29.705 13:51:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:30:29.705 13:51:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.705 13:51:26 thread -- common/autotest_common.sh@10 -- # set +x 00:30:29.705 ************************************ 00:30:29.705 START TEST thread_poller_perf 00:30:29.705 ************************************ 00:30:29.705 13:51:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:30:29.705 [2024-11-20 13:51:26.953628] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:29.705 [2024-11-20 13:51:26.954010] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61006 ] 00:30:29.963 [2024-11-20 13:51:27.149863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.963 Running 1000 pollers for 1 seconds with 0 microseconds period. 
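The first summary above checks out by hand: poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (2.1 GHz here):

  echo $(( 2110534404 / 330000 ))               # 6395 cyc per poller call
  echo $(( 6395 * 1000000000 / 2100000000 ))    # 3045 nsec

The second run, announced just above with a 0 microseconds period, lets the pollers run untimed, so far more iterations complete and the per-call cost drops; its results follow, and the same formula gives 2103845516 / 4008000 ≈ 524 cyc, i.e. ≈ 249 nsec.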
00:30:29.963 [2024-11-20 13:51:27.275735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.336 [2024-11-20T13:51:28.659Z] ====================================== 00:30:31.336 [2024-11-20T13:51:28.659Z] busy:2103845516 (cyc) 00:30:31.336 [2024-11-20T13:51:28.659Z] total_run_count: 4008000 00:30:31.336 [2024-11-20T13:51:28.659Z] tsc_hz: 2100000000 (cyc) 00:30:31.336 [2024-11-20T13:51:28.659Z] ====================================== 00:30:31.336 [2024-11-20T13:51:28.659Z] poller_cost: 524 (cyc), 249 (nsec) 00:30:31.336 ************************************ 00:30:31.336 END TEST thread_poller_perf 00:30:31.336 ************************************ 00:30:31.337 00:30:31.337 real 0m1.656s 00:30:31.337 user 0m1.427s 00:30:31.337 sys 0m0.120s 00:30:31.337 13:51:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.337 13:51:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:30:31.337 13:51:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:30:31.337 00:30:31.337 real 0m3.648s 00:30:31.337 user 0m3.044s 00:30:31.337 sys 0m0.387s 00:30:31.337 ************************************ 00:30:31.337 END TEST thread 00:30:31.337 ************************************ 00:30:31.337 13:51:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.337 13:51:28 thread -- common/autotest_common.sh@10 -- # set +x 00:30:31.337 13:51:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:30:31.337 13:51:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:30:31.337 13:51:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:31.337 13:51:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.337 13:51:28 -- common/autotest_common.sh@10 -- # set +x 00:30:31.337 ************************************ 00:30:31.337 START TEST app_cmdline 00:30:31.337 ************************************ 00:30:31.337 13:51:28 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:30:31.595 * Looking for test storage... 
00:30:31.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:30:31.595 13:51:28 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:31.595 13:51:28 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.595 13:51:28 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:30:31.595 13:51:28 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:30:31.595 13:51:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.596 13:51:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.596 --rc genhtml_branch_coverage=1 00:30:31.596 --rc genhtml_function_coverage=1 00:30:31.596 --rc genhtml_legend=1 00:30:31.596 --rc geninfo_all_blocks=1 00:30:31.596 --rc geninfo_unexecuted_blocks=1 00:30:31.596 00:30:31.596 ' 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.596 --rc genhtml_branch_coverage=1 00:30:31.596 --rc genhtml_function_coverage=1 00:30:31.596 --rc genhtml_legend=1 00:30:31.596 --rc geninfo_all_blocks=1 00:30:31.596 --rc geninfo_unexecuted_blocks=1 00:30:31.596 
00:30:31.596 ' 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.596 --rc genhtml_branch_coverage=1 00:30:31.596 --rc genhtml_function_coverage=1 00:30:31.596 --rc genhtml_legend=1 00:30:31.596 --rc geninfo_all_blocks=1 00:30:31.596 --rc geninfo_unexecuted_blocks=1 00:30:31.596 00:30:31.596 ' 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.596 --rc genhtml_branch_coverage=1 00:30:31.596 --rc genhtml_function_coverage=1 00:30:31.596 --rc genhtml_legend=1 00:30:31.596 --rc geninfo_all_blocks=1 00:30:31.596 --rc geninfo_unexecuted_blocks=1 00:30:31.596 00:30:31.596 ' 00:30:31.596 13:51:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:30:31.596 13:51:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61095 00:30:31.596 13:51:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61095 00:30:31.596 13:51:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61095 ']' 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.596 13:51:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:31.854 [2024-11-20 13:51:29.008557] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
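Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on this target: the cmdline test runs it with an RPC allow-list, so only those two methods are served. Illustrative calls against the default socket (the last one is exactly what the test exercises further down):

  scripts/rpc.py spdk_get_version         # allowed: returns the version object printed below
  scripts/rpc.py rpc_get_methods          # allowed: should list exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats   # not in the allow-list: fails with -32601 'Method not found'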
00:30:31.854 [2024-11-20 13:51:29.008985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61095 ] 00:30:32.115 [2024-11-20 13:51:29.205545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.115 [2024-11-20 13:51:29.379829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.103 13:51:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.103 13:51:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:30:33.103 13:51:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:30:33.362 { 00:30:33.362 "version": "SPDK v25.01-pre git sha1 f9d18d578", 00:30:33.362 "fields": { 00:30:33.362 "major": 25, 00:30:33.362 "minor": 1, 00:30:33.362 "patch": 0, 00:30:33.363 "suffix": "-pre", 00:30:33.363 "commit": "f9d18d578" 00:30:33.363 } 00:30:33.363 } 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:30:33.621 13:51:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:33.621 13:51:30 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:33.880 request: 00:30:33.880 { 00:30:33.880 "method": "env_dpdk_get_mem_stats", 00:30:33.880 "req_id": 1 00:30:33.880 } 00:30:33.880 Got JSON-RPC error response 00:30:33.880 response: 00:30:33.880 { 00:30:33.880 "code": -32601, 00:30:33.880 "message": "Method not found" 00:30:33.880 } 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.880 13:51:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61095 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61095 ']' 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61095 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.880 13:51:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61095 00:30:33.880 killing process with pid 61095 00:30:33.880 13:51:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.880 13:51:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.880 13:51:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61095' 00:30:33.880 13:51:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 61095 00:30:33.880 13:51:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 61095 00:30:37.204 00:30:37.204 real 0m5.366s 00:30:37.204 user 0m5.762s 00:30:37.204 sys 0m0.701s 00:30:37.204 13:51:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.204 ************************************ 00:30:37.204 END TEST app_cmdline 00:30:37.204 ************************************ 00:30:37.204 13:51:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:37.204 13:51:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:30:37.204 13:51:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:37.204 13:51:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.204 13:51:34 -- common/autotest_common.sh@10 -- # set +x 00:30:37.204 ************************************ 00:30:37.204 START TEST version 00:30:37.204 ************************************ 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:30:37.204 * Looking for test storage... 
00:30:37.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.204 13:51:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.204 13:51:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.204 13:51:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.204 13:51:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.204 13:51:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.204 13:51:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.204 13:51:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.204 13:51:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.204 13:51:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.204 13:51:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.204 13:51:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.204 13:51:34 version -- scripts/common.sh@344 -- # case "$op" in 00:30:37.204 13:51:34 version -- scripts/common.sh@345 -- # : 1 00:30:37.204 13:51:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.204 13:51:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.204 13:51:34 version -- scripts/common.sh@365 -- # decimal 1 00:30:37.204 13:51:34 version -- scripts/common.sh@353 -- # local d=1 00:30:37.204 13:51:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.204 13:51:34 version -- scripts/common.sh@355 -- # echo 1 00:30:37.204 13:51:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.204 13:51:34 version -- scripts/common.sh@366 -- # decimal 2 00:30:37.204 13:51:34 version -- scripts/common.sh@353 -- # local d=2 00:30:37.204 13:51:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.204 13:51:34 version -- scripts/common.sh@355 -- # echo 2 00:30:37.204 13:51:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.204 13:51:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.204 13:51:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.204 13:51:34 version -- scripts/common.sh@368 -- # return 0 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.204 --rc genhtml_branch_coverage=1 00:30:37.204 --rc genhtml_function_coverage=1 00:30:37.204 --rc genhtml_legend=1 00:30:37.204 --rc geninfo_all_blocks=1 00:30:37.204 --rc geninfo_unexecuted_blocks=1 00:30:37.204 00:30:37.204 ' 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.204 --rc genhtml_branch_coverage=1 00:30:37.204 --rc genhtml_function_coverage=1 00:30:37.204 --rc genhtml_legend=1 00:30:37.204 --rc geninfo_all_blocks=1 00:30:37.204 --rc geninfo_unexecuted_blocks=1 00:30:37.204 00:30:37.204 ' 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.204 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:37.204 --rc genhtml_branch_coverage=1 00:30:37.204 --rc genhtml_function_coverage=1 00:30:37.204 --rc genhtml_legend=1 00:30:37.204 --rc geninfo_all_blocks=1 00:30:37.204 --rc geninfo_unexecuted_blocks=1 00:30:37.204 00:30:37.204 ' 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.204 --rc genhtml_branch_coverage=1 00:30:37.204 --rc genhtml_function_coverage=1 00:30:37.204 --rc genhtml_legend=1 00:30:37.204 --rc geninfo_all_blocks=1 00:30:37.204 --rc geninfo_unexecuted_blocks=1 00:30:37.204 00:30:37.204 ' 00:30:37.204 13:51:34 version -- app/version.sh@17 -- # get_header_version major 00:30:37.204 13:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # cut -f2 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:30:37.204 13:51:34 version -- app/version.sh@17 -- # major=25 00:30:37.204 13:51:34 version -- app/version.sh@18 -- # get_header_version minor 00:30:37.204 13:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # cut -f2 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:30:37.204 13:51:34 version -- app/version.sh@18 -- # minor=1 00:30:37.204 13:51:34 version -- app/version.sh@19 -- # get_header_version patch 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:30:37.204 13:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # cut -f2 00:30:37.204 13:51:34 version -- app/version.sh@19 -- # patch=0 00:30:37.204 13:51:34 version -- app/version.sh@20 -- # get_header_version suffix 00:30:37.204 13:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # cut -f2 00:30:37.204 13:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:30:37.204 13:51:34 version -- app/version.sh@20 -- # suffix=-pre 00:30:37.204 13:51:34 version -- app/version.sh@22 -- # version=25.1 00:30:37.204 13:51:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:30:37.204 13:51:34 version -- app/version.sh@28 -- # version=25.1rc0 00:30:37.204 13:51:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:37.204 13:51:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:30:37.204 13:51:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:30:37.204 13:51:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:30:37.204 00:30:37.204 real 0m0.250s 00:30:37.204 user 0m0.150s 00:30:37.204 sys 0m0.142s 00:30:37.204 ************************************ 00:30:37.204 END TEST version 00:30:37.204 ************************************ 00:30:37.204 13:51:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.204 13:51:34 version -- common/autotest_common.sh@10 -- # set +x 00:30:37.204 13:51:34 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:30:37.204 13:51:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:30:37.204 13:51:34 -- spdk/autotest.sh@194 -- # uname -s 00:30:37.204 13:51:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:30:37.205 13:51:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:30:37.205 13:51:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:30:37.205 13:51:34 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:30:37.205 13:51:34 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:37.205 13:51:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:37.205 13:51:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.205 13:51:34 -- common/autotest_common.sh@10 -- # set +x 00:30:37.205 ************************************ 00:30:37.205 START TEST blockdev_nvme 00:30:37.205 ************************************ 00:30:37.205 13:51:34 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:37.205 * Looking for test storage... 00:30:37.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:37.205 13:51:34 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.205 13:51:34 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.205 13:51:34 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.490 13:51:34 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.490 13:51:34 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:30:37.490 13:51:34 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.490 13:51:34 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.490 --rc genhtml_branch_coverage=1 00:30:37.490 --rc genhtml_function_coverage=1 00:30:37.490 --rc genhtml_legend=1 00:30:37.490 --rc geninfo_all_blocks=1 00:30:37.490 --rc geninfo_unexecuted_blocks=1 00:30:37.490 00:30:37.490 ' 00:30:37.490 13:51:34 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.490 --rc genhtml_branch_coverage=1 00:30:37.490 --rc genhtml_function_coverage=1 00:30:37.490 --rc genhtml_legend=1 00:30:37.490 --rc geninfo_all_blocks=1 00:30:37.490 --rc geninfo_unexecuted_blocks=1 00:30:37.490 00:30:37.490 ' 00:30:37.490 13:51:34 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.490 --rc genhtml_branch_coverage=1 00:30:37.490 --rc genhtml_function_coverage=1 00:30:37.490 --rc genhtml_legend=1 00:30:37.490 --rc geninfo_all_blocks=1 00:30:37.490 --rc geninfo_unexecuted_blocks=1 00:30:37.490 00:30:37.490 ' 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.491 --rc genhtml_branch_coverage=1 00:30:37.491 --rc genhtml_function_coverage=1 00:30:37.491 --rc genhtml_legend=1 00:30:37.491 --rc geninfo_all_blocks=1 00:30:37.491 --rc geninfo_unexecuted_blocks=1 00:30:37.491 00:30:37.491 ' 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:37.491 13:51:34 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61289 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61289 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61289 ']' 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.491 13:51:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:37.491 13:51:34 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:37.491 [2024-11-20 13:51:34.732508] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
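Before any bdev tests run, this spdk_tgt gets its NVMe configuration injected as JSON: gen_nvme.sh emits one bdev_nvme_attach_controller entry per PCIe controller (Nvme0-Nvme3 at 0000:00:10.0 through 0000:00:13.0, visible in the load_subsystem_config call below). The interactive equivalent for a single controller would be something like this, flags per SPDK's rpc.py, illustrative rather than taken from this log:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0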
00:30:37.491 [2024-11-20 13:51:34.733218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:30:37.749 [2024-11-20 13:51:34.923080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.006 [2024-11-20 13:51:35.094632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.936 13:51:36 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.936 13:51:36 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:30:38.936 13:51:36 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:30:38.936 13:51:36 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:30:38.936 13:51:36 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:30:38.936 13:51:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:30:38.936 13:51:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:39.194 13:51:36 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:30:39.194 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.194 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.452 13:51:36 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.452 13:51:36 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:30:39.452 13:51:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:39.712 13:51:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.712 13:51:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:30:39.712 13:51:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:30:39.713 13:51:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "39d7df78-c9fb-4bab-826b-89baa778aa22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "39d7df78-c9fb-4bab-826b-89baa778aa22",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6ffda5e8-744f-4792-9b18-32ebf687c33e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6ffda5e8-744f-4792-9b18-32ebf687c33e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ed2041be-7e6a-438a-9deb-c8b52b5f0d34"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed2041be-7e6a-438a-9deb-c8b52b5f0d34",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3bd2c22f-bc17-4860-b18b-fe83de91f3a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3bd2c22f-bc17-4860-b18b-fe83de91f3a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9b3b745a-6933-4422-8fa5-9a2891149822"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "9b3b745a-6933-4422-8fa5-9a2891149822",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "850a4574-dd65-4fc2-9cbf-5a87d8aae3a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "850a4574-dd65-4fc2-9cbf-5a87d8aae3a7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:30:39.713 13:51:36 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:30:39.713 13:51:36 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:30:39.713 13:51:36 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:30:39.713 13:51:36 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61289 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61289 ']' 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61289 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:30:39.713 13:51:36 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61289 00:30:39.713 killing process with pid 61289 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61289' 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61289 00:30:39.713 13:51:36 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61289 00:30:43.014 13:51:39 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:43.014 13:51:39 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:43.014 13:51:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:43.014 13:51:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.014 13:51:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:43.014 ************************************ 00:30:43.014 START TEST bdev_hello_world 00:30:43.014 ************************************ 00:30:43.014 13:51:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:43.014 [2024-11-20 13:51:40.027887] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:43.014 [2024-11-20 13:51:40.028366] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61395 ] 00:30:43.014 [2024-11-20 13:51:40.227891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.272 [2024-11-20 13:51:40.401109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.838 [2024-11-20 13:51:41.130854] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:43.838 [2024-11-20 13:51:41.130929] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:30:43.838 [2024-11-20 13:51:41.130982] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:43.838 [2024-11-20 13:51:41.135065] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:43.838 [2024-11-20 13:51:41.135653] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:43.838 [2024-11-20 13:51:41.135698] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:43.838 [2024-11-20 13:51:41.135852] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
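The hello_bdev run traced above can be reproduced by hand. A minimal sketch, assuming the same checkout at /home/vagrant/spdk_repo/spdk and that test/bdev/bdev.json carries the bdev_nvme_attach_controller entries shown in the load_subsystem_config payload earlier:

# Re-run the hello world example against the first attached controller.
cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
# On success it emits the same NOTICE sequence as the log: open bdev Nvme0n1,
# open an io channel, write "Hello World!", then read it back and stop.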
00:30:43.838 00:30:43.838 [2024-11-20 13:51:41.135891] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:45.738 00:30:45.738 real 0m2.634s 00:30:45.738 user 0m2.211s 00:30:45.738 sys 0m0.305s 00:30:45.738 13:51:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.738 ************************************ 00:30:45.738 END TEST bdev_hello_world 00:30:45.738 ************************************ 00:30:45.738 13:51:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:45.738 13:51:42 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:30:45.738 13:51:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:45.738 13:51:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.738 13:51:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:45.738 ************************************ 00:30:45.738 START TEST bdev_bounds 00:30:45.738 ************************************ 00:30:45.738 Process bdevio pid: 61443 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61443 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61443' 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61443 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61443 ']' 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.738 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.739 13:51:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:45.739 [2024-11-20 13:51:42.692880] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:45.739 [2024-11-20 13:51:42.693357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:30:45.739 [2024-11-20 13:51:42.885225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.017 [2024-11-20 13:51:43.087517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.017 [2024-11-20 13:51:43.087631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.017 [2024-11-20 13:51:43.087636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.951 13:51:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.951 13:51:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:30:46.951 13:51:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:46.951 I/O targets: 00:30:46.951 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:30:46.951 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:30:46.951 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:46.951 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:46.951 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:46.951 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:30:46.951 00:30:46.951 00:30:46.951 CUnit - A unit testing framework for C - Version 2.1-3 00:30:46.951 http://cunit.sourceforge.net/ 00:30:46.951 00:30:46.951 00:30:46.951 Suite: bdevio tests on: Nvme3n1 00:30:46.951 Test: blockdev write read block ...passed 00:30:46.951 Test: blockdev write zeroes read block ...passed 00:30:46.951 Test: blockdev write zeroes read no split ...passed 00:30:46.951 Test: blockdev write zeroes read split ...passed 00:30:46.951 Test: blockdev write zeroes read split partial ...passed 00:30:46.951 Test: blockdev reset ...[2024-11-20 13:51:44.138193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:30:46.951 [2024-11-20 13:51:44.142819] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:30:46.951 passed 00:30:46.951 Test: blockdev write read 8 blocks ...
00:30:46.951 passed 00:30:46.951 Test: blockdev write read size > 128k ...passed 00:30:46.951 Test: blockdev write read invalid size ...passed 00:30:46.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:46.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:46.951 Test: blockdev write read max offset ...passed 00:30:46.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:46.951 Test: blockdev writev readv 8 blocks ...passed 00:30:46.951 Test: blockdev writev readv 30 x 1block ...passed 00:30:46.951 Test: blockdev writev readv block ...passed 00:30:46.951 Test: blockdev writev readv size > 128k ...passed 00:30:46.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:46.951 Test: blockdev comparev and writev ...[2024-11-20 13:51:44.151300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b600a000 len:0x1000 00:30:46.951 [2024-11-20 13:51:44.151366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:46.951 passed 00:30:46.951 Test: blockdev nvme passthru rw ...passed 00:30:46.951 Test: blockdev nvme passthru vendor specific ...passed 00:30:46.951 Test: blockdev nvme admin passthru ...[2024-11-20 13:51:44.152038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:46.951 [2024-11-20 13:51:44.152084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:46.951 passed 00:30:46.951 Test: blockdev copy ...passed 00:30:46.951 Suite: bdevio tests on: Nvme2n3 00:30:46.951 Test: blockdev write read block ...passed 00:30:46.951 Test: blockdev write zeroes read block ...passed 00:30:46.951 Test: blockdev write zeroes read no split ...passed 00:30:46.951 Test: blockdev write zeroes read split ...passed 00:30:46.951 Test: blockdev write zeroes read split partial ...passed 00:30:46.951 Test: blockdev reset ...[2024-11-20 13:51:44.239678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:46.951 passed 00:30:46.951 Test: blockdev write read 8 blocks ...[2024-11-20 13:51:44.244280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
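The COMPARE FAILURE and INVALID OPCODE completions interleaved with the test lines above are expected output rather than errors: the comparev test deliberately compares mismatching data, and the passthru tests probe opcodes the target rejects. A sketch of how the status print decodes, assuming SPDK's usual (SCT/SC) hex convention:

# COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
#   (02/85)       -> Status Code Type 0x2 (Media and Data Integrity Errors),
#                    Status Code 0x85 (Compare Failure) in the NVMe spec
#   qid:1 cid:190 -> I/O queue pair 1, command identifier 190, matching the
#                    COMPARE submission printed just before it
#   dnr:1         -> Do Not Retry is set; bdevio counts this completion as a
#                    pass, since the mismatch is exactly what it provoked
# INVALID OPCODE (00/01) decodes the same way: generic status (SCT 0x0),
# Invalid Command Opcode (SC 0x01).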
00:30:46.951 passed 00:30:46.951 Test: blockdev write read size > 128k ...passed 00:30:46.951 Test: blockdev write read invalid size ...passed 00:30:46.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:46.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:46.951 Test: blockdev write read max offset ...passed 00:30:46.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:46.951 Test: blockdev writev readv 8 blocks ...passed 00:30:46.951 Test: blockdev writev readv 30 x 1block ...passed 00:30:46.951 Test: blockdev writev readv block ...passed 00:30:46.951 Test: blockdev writev readv size > 128k ...passed 00:30:46.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:46.951 Test: blockdev comparev and writev ...[2024-11-20 13:51:44.252508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299206000 len:0x1000 00:30:46.951 [2024-11-20 13:51:44.252703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:46.951 passed 00:30:46.951 Test: blockdev nvme passthru rw ...passed 00:30:46.951 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:51:44.253448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:46.951 [2024-11-20 13:51:44.253506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:46.951 passed 00:30:46.951 Test: blockdev nvme admin passthru ...passed 00:30:46.951 Test: blockdev copy ...passed 00:30:46.951 Suite: bdevio tests on: Nvme2n2 00:30:46.951 Test: blockdev write read block ...passed 00:30:47.210 Test: blockdev write zeroes read block ...passed 00:30:47.210 Test: blockdev write zeroes read no split ...passed 00:30:47.210 Test: blockdev write zeroes read split ...passed 00:30:47.210 Test: blockdev write zeroes read split partial ...passed 00:30:47.210 Test: blockdev reset ...[2024-11-20 13:51:44.339327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:47.210 [2024-11-20 13:51:44.344265] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:30:47.210 passed 00:30:47.210 Test: blockdev write read 8 blocks ...passed 00:30:47.210 Test: blockdev write read size > 128k ...passed 00:30:47.210 Test: blockdev write read invalid size ...passed 00:30:47.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:47.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:47.210 Test: blockdev write read max offset ...passed 00:30:47.210 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:47.210 Test: blockdev writev readv 8 blocks ...passed 00:30:47.210 Test: blockdev writev readv 30 x 1block ...passed 00:30:47.210 Test: blockdev writev readv block ...passed 00:30:47.210 Test: blockdev writev readv size > 128k ...passed 00:30:47.210 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:47.210 Test: blockdev comparev and writev ...[2024-11-20 13:51:44.358371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c603c000 len:0x1000 00:30:47.210 [2024-11-20 13:51:44.358439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:47.210 passed 00:30:47.210 Test: blockdev nvme passthru rw ...passed 00:30:47.210 Test: blockdev nvme passthru vendor specific ...passed 00:30:47.210 Test: blockdev nvme admin passthru ...[2024-11-20 13:51:44.359169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:47.210 [2024-11-20 13:51:44.359210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:47.210 passed 00:30:47.210 Test: blockdev copy ...passed 00:30:47.210 Suite: bdevio tests on: Nvme2n1 00:30:47.210 Test: blockdev write read block ...passed 00:30:47.210 Test: blockdev write zeroes read block ...passed 00:30:47.210 Test: blockdev write zeroes read no split ...passed 00:30:47.210 Test: blockdev write zeroes read split ...passed 00:30:47.210 Test: blockdev write zeroes read split partial ...passed 00:30:47.210 Test: blockdev reset ...[2024-11-20 13:51:44.435903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:47.210 [2024-11-20 13:51:44.440783] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:30:47.210 passed 00:30:47.210 Test: blockdev write read 8 blocks ...
00:30:47.210 passed 00:30:47.210 Test: blockdev write read size > 128k ...passed 00:30:47.210 Test: blockdev write read invalid size ...passed 00:30:47.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:47.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:47.210 Test: blockdev write read max offset ...passed 00:30:47.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:47.211 Test: blockdev writev readv 8 blocks ...passed 00:30:47.211 Test: blockdev writev readv 30 x 1block ...passed 00:30:47.211 Test: blockdev writev readv block ...passed 00:30:47.211 Test: blockdev writev readv size > 128k ...passed 00:30:47.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:47.211 Test: blockdev comparev and writev ...[2024-11-20 13:51:44.449576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6038000 len:0x1000 00:30:47.211 [2024-11-20 13:51:44.449642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:47.211 passed 00:30:47.211 Test: blockdev nvme passthru rw ...passed 00:30:47.211 Test: blockdev nvme passthru vendor specific ...passed 00:30:47.211 Test: blockdev nvme admin passthru ...[2024-11-20 13:51:44.450415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:47.211 [2024-11-20 13:51:44.450455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:47.211 passed 00:30:47.211 Test: blockdev copy ...passed 00:30:47.211 Suite: bdevio tests on: Nvme1n1 00:30:47.211 Test: blockdev write read block ...passed 00:30:47.211 Test: blockdev write zeroes read block ...passed 00:30:47.211 Test: blockdev write zeroes read no split ...passed 00:30:47.211 Test: blockdev write zeroes read split ...passed 00:30:47.469 Test: blockdev write zeroes read split partial ...passed 00:30:47.469 Test: blockdev reset ...[2024-11-20 13:51:44.534793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:30:47.469 [2024-11-20 13:51:44.539117] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:30:47.469 passed 00:30:47.469 Test: blockdev write read 8 blocks ...passed 00:30:47.469 Test: blockdev write read size > 128k ...passed 00:30:47.469 Test: blockdev write read invalid size ...passed 00:30:47.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:47.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:47.469 Test: blockdev write read max offset ...passed 00:30:47.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:47.469 Test: blockdev writev readv 8 blocks ...passed 00:30:47.469 Test: blockdev writev readv 30 x 1block ...passed 00:30:47.469 Test: blockdev writev readv block ...passed 00:30:47.469 Test: blockdev writev readv size > 128k ...passed 00:30:47.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:47.469 Test: blockdev comparev and writev ...[2024-11-20 13:51:44.548642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6034000 len:0x1000 00:30:47.469 [2024-11-20 13:51:44.548708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:47.469 passed 00:30:47.469 Test: blockdev nvme passthru rw ...passed 00:30:47.469 Test: blockdev nvme passthru vendor specific ...passed 00:30:47.469 Test: blockdev nvme admin passthru ...[2024-11-20 13:51:44.549473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:47.469 [2024-11-20 13:51:44.549524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:47.469 passed 00:30:47.470 Test: blockdev copy ...passed 00:30:47.470 Suite: bdevio tests on: Nvme0n1 00:30:47.470 Test: blockdev write read block ...passed 00:30:47.470 Test: blockdev write zeroes read block ...passed 00:30:47.470 Test: blockdev write zeroes read no split ...passed 00:30:47.470 Test: blockdev write zeroes read split ...passed 00:30:47.470 Test: blockdev write zeroes read split partial ...passed 00:30:47.470 Test: blockdev reset ...[2024-11-20 13:51:44.634698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:30:47.470 passed 00:30:47.470 Test: blockdev write read 8 blocks ...[2024-11-20 13:51:44.638962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:30:47.470 passed 00:30:47.470 Test: blockdev write read size > 128k ...passed 00:30:47.470 Test: blockdev write read invalid size ...passed 00:30:47.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:47.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:47.470 Test: blockdev write read max offset ...passed 00:30:47.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:47.470 Test: blockdev writev readv 8 blocks ...passed 00:30:47.470 Test: blockdev writev readv 30 x 1block ...passed 00:30:47.470 Test: blockdev writev readv block ...passed 00:30:47.470 Test: blockdev writev readv size > 128k ...passed 00:30:47.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:47.470 Test: blockdev comparev and writev ...passed 00:30:47.470 Test: blockdev nvme passthru rw ...[2024-11-20 13:51:44.647410] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:30:47.470 separate metadata which is not supported yet. 00:30:47.470 passed 00:30:47.470 Test: blockdev nvme passthru vendor specific ...passed 00:30:47.470 Test: blockdev nvme admin passthru ...[2024-11-20 13:51:44.647954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:30:47.470 [2024-11-20 13:51:44.648013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:30:47.470 passed 00:30:47.470 Test: blockdev copy ...passed 00:30:47.470 00:30:47.470 Run Summary: Type Total Ran Passed Failed Inactive 00:30:47.470 suites 6 6 n/a 0 0 00:30:47.470 tests 138 138 138 0 0 00:30:47.470 asserts 893 893 893 0 n/a 00:30:47.470 00:30:47.470 Elapsed time = 1.638 seconds 00:30:47.470 0 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61443 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61443 ']' 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61443 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61443 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61443' 00:30:47.470 killing process with pid 61443 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61443 00:30:47.470 13:51:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61443 00:30:48.849 13:51:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:48.849 ************************************ 00:30:48.849 END TEST bdev_bounds 00:30:48.849 ************************************ 00:30:48.849 00:30:48.849 real 0m3.290s 00:30:48.849 user 0m8.680s 00:30:48.849 sys 0m0.484s 00:30:48.849 13:51:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.849 13:51:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:30:48.849 13:51:45 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:30:48.849 13:51:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:48.849 13:51:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.849 13:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:30:48.849 ************************************ 00:30:48.849 START TEST bdev_nbd 00:30:48.849 ************************************ 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61513 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61513 /var/tmp/spdk-nbd.sock 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61513 ']' 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:48.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
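The bdev_nbd test starting here talks to a bdev_svc app over a dedicated RPC socket; the long start/verify/stop loops traced below reduce to this flow (paths are from the trace; the scratch file name is illustrative):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
# Export a bdev as a kernel nbd block device...
$RPC -s $SOCK nbd_start_disk Nvme0n1 /dev/nbd0
# ...prove the device answers a single direct-I/O block read...
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# ...list what is currently exported, then tear it down again.
$RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device'
$RPC -s $SOCK nbd_stop_disk /dev/nbd0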
00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.849 13:51:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:48.849 [2024-11-20 13:51:46.049010] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:48.850 [2024-11-20 13:51:46.049185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.108 [2024-11-20 13:51:46.245583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.108 [2024-11-20 13:51:46.367307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.045 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.045 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:50.046 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:50.357 13:51:47 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:50.357 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.357 1+0 records in 00:30:50.357 1+0 records out 00:30:50.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486637 s, 8.4 MB/s 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:50.358 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.616 1+0 records in 00:30:50.616 1+0 records out 00:30:50.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728812 s, 5.6 MB/s 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:50.616 13:51:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:50.616 13:51:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.875 1+0 records in 00:30:50.875 1+0 records out 00:30:50.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623213 s, 6.6 MB/s 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:50.875 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:30:51.134 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.394 1+0 records in 00:30:51.394 1+0 records out 00:30:51.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570199 s, 7.2 MB/s 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:51.394 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:30:51.652 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:30:51.652 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:30:51.652 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.653 1+0 records in 00:30:51.653 1+0 records out 00:30:51.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826835 s, 5.0 MB/s 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:51.653 13:51:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.911 1+0 records in 00:30:51.911 1+0 records out 00:30:51.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683546 s, 6.0 MB/s 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:51.911 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:51.912 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd0", 00:30:52.170 "bdev_name": "Nvme0n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd1", 00:30:52.170 "bdev_name": "Nvme1n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd2", 00:30:52.170 "bdev_name": "Nvme2n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd3", 00:30:52.170 "bdev_name": "Nvme2n2" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd4", 00:30:52.170 "bdev_name": "Nvme2n3" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd5", 00:30:52.170 "bdev_name": "Nvme3n1" 00:30:52.170 } 00:30:52.170 ]' 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd0", 00:30:52.170 "bdev_name": "Nvme0n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd1", 00:30:52.170 "bdev_name": "Nvme1n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 
"nbd_device": "/dev/nbd2", 00:30:52.170 "bdev_name": "Nvme2n1" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd3", 00:30:52.170 "bdev_name": "Nvme2n2" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd4", 00:30:52.170 "bdev_name": "Nvme2n3" 00:30:52.170 }, 00:30:52.170 { 00:30:52.170 "nbd_device": "/dev/nbd5", 00:30:52.170 "bdev_name": "Nvme3n1" 00:30:52.170 } 00:30:52.170 ]' 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.170 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.738 13:51:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.997 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:30:53.255 13:51:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:53.255 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:53.514 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:53.773 13:51:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
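The waitfornbd / waitfornbd_exit helpers seen throughout these loops poll /proc/partitions until the named device appears (or disappears). A condensed sketch of the appear side, keeping the 20-try budget from the trace; the pause between polls is an assumption, since the trace elides it:

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # The device is usable once the kernel lists it in its partition table.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed back-off between retries
    done
    # Smoke-test with one direct-I/O block read, as the dd lines above show.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}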
00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:54.031 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:54.290 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:54.290 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:54.290 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:54.549 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:30:54.817 /dev/nbd0 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:54.817 1+0 records in 00:30:54.817 1+0 records out 00:30:54.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444096 s, 9.2 MB/s 00:30:54.817 13:51:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:54.817 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:30:55.091 /dev/nbd1 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:55.091 1+0 records in 00:30:55.091 1+0 records out 
00:30:55.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600033 s, 6.8 MB/s 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:55.091 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:30:55.350 /dev/nbd10 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:55.350 1+0 records in 00:30:55.350 1+0 records out 00:30:55.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576661 s, 7.1 MB/s 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:55.350 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:30:55.609 /dev/nbd11 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:30:55.609 13:51:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:55.609 1+0 records in 00:30:55.609 1+0 records out 00:30:55.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452238 s, 9.1 MB/s 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:55.609 13:51:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:30:55.868 /dev/nbd12 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:55.869 1+0 records in 00:30:55.869 1+0 records out 00:30:55.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658529 s, 6.2 MB/s 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:55.869 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:30:56.127 /dev/nbd13 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:56.127 1+0 records in 00:30:56.127 1+0 records out 00:30:56.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555574 s, 7.4 MB/s 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:56.127 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd0", 00:30:56.696 "bdev_name": "Nvme0n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd1", 00:30:56.696 "bdev_name": "Nvme1n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd10", 00:30:56.696 "bdev_name": "Nvme2n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd11", 00:30:56.696 "bdev_name": "Nvme2n2" 00:30:56.696 }, 
00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd12", 00:30:56.696 "bdev_name": "Nvme2n3" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd13", 00:30:56.696 "bdev_name": "Nvme3n1" 00:30:56.696 } 00:30:56.696 ]' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd0", 00:30:56.696 "bdev_name": "Nvme0n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd1", 00:30:56.696 "bdev_name": "Nvme1n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd10", 00:30:56.696 "bdev_name": "Nvme2n1" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd11", 00:30:56.696 "bdev_name": "Nvme2n2" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd12", 00:30:56.696 "bdev_name": "Nvme2n3" 00:30:56.696 }, 00:30:56.696 { 00:30:56.696 "nbd_device": "/dev/nbd13", 00:30:56.696 "bdev_name": "Nvme3n1" 00:30:56.696 } 00:30:56.696 ]' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:56.696 /dev/nbd1 00:30:56.696 /dev/nbd10 00:30:56.696 /dev/nbd11 00:30:56.696 /dev/nbd12 00:30:56.696 /dev/nbd13' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:56.696 /dev/nbd1 00:30:56.696 /dev/nbd10 00:30:56.696 /dev/nbd11 00:30:56.696 /dev/nbd12 00:30:56.696 /dev/nbd13' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:56.696 256+0 records in 00:30:56.696 256+0 records out 00:30:56.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726726 s, 144 MB/s 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:56.696 256+0 records in 00:30:56.696 256+0 records out 00:30:56.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127761 s, 8.2 MB/s 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:56.696 13:51:53 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:56.955 256+0 records in 00:30:56.955 256+0 records out 00:30:56.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137523 s, 7.6 MB/s 00:30:56.955 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:56.955 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:56.955 256+0 records in 00:30:56.955 256+0 records out 00:30:56.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133809 s, 7.8 MB/s 00:30:56.955 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:56.955 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:30:57.214 256+0 records in 00:30:57.214 256+0 records out 00:30:57.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133735 s, 7.8 MB/s 00:30:57.214 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:57.214 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:30:57.214 256+0 records in 00:30:57.214 256+0 records out 00:30:57.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134537 s, 7.8 MB/s 00:30:57.214 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:57.214 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:30:57.472 256+0 records in 00:30:57.472 256+0 records out 00:30:57.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135518 s, 7.7 MB/s 00:30:57.472 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:30:57.472 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:57.473 13:51:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:57.473 13:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:57.732 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.299 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.557 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:58.816 13:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:59.074 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:30:59.332 
13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:59.332 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:59.590 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:30:59.849 13:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:00.107 malloc_lvol_verify 00:31:00.107 13:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:00.366 4984d5a8-5b4b-4dc5-a898-e5eea16137ff 00:31:00.366 13:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:00.625 3a6b0d44-840f-4020-af3e-9119c502cd81 00:31:00.625 13:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:00.885 /dev/nbd0 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:31:00.885 13:51:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:31:00.885 mke2fs 1.47.0 (5-Feb-2023) 00:31:00.885 Discarding device blocks: 0/4096 done 00:31:00.885 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:00.885 00:31:00.885 Allocating group tables: 0/1 done 00:31:00.885 Writing inode tables: 0/1 done 00:31:00.885 Creating journal (1024 blocks): done 00:31:00.885 Writing superblocks and filesystem accounting information: 0/1 done 00:31:00.885 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:00.885 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:01.453 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:01.453 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:01.453 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61513 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61513 ']' 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61513 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61513 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:01.454 killing process with pid 61513 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61513' 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61513 00:31:01.454 13:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61513 00:31:02.828 13:52:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:02.828 00:31:02.828 real 0m14.137s 00:31:02.828 user 0m19.140s 00:31:02.828 sys 0m5.446s 00:31:02.828 13:52:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.828 13:52:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
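The nbd_with_lvol_verify sequence above condenses to a handful of RPCs against the same spdk-nbd.sock. A sketch of that round trip, not the test script itself; the 16/512 and 4 are the size arguments visible in the trace, taken here to be MiB and bytes-per-block as rpc.py counts them:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume on it
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
    mkfs.ext4 /dev/nbd0                                    # any real I/O works; the test uses mkfs
    $rpc nbd_stop_disk /dev/nbd0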
00:31:02.828 ************************************
00:31:02.828 END TEST bdev_nbd
00:31:02.828 ************************************
00:31:02.828 13:52:00 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:31:02.828 13:52:00 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:31:02.828 skipping fio tests on NVMe due to multi-ns failures.
00:31:02.828 13:52:00 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:31:02.828 13:52:00 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:31:02.828 13:52:00 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:31:02.828 13:52:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:31:02.828 13:52:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:02.828 13:52:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:31:02.828 ************************************
00:31:02.828 START TEST bdev_verify
00:31:02.828 ************************************
00:31:02.828 13:52:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:31:03.087 [2024-11-20 13:52:00.247547] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:31:03.087 [2024-11-20 13:52:00.247719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61932 ]
00:31:03.345 [2024-11-20 13:52:00.445371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:03.345 [2024-11-20 13:52:00.587036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:03.345 [2024-11-20 13:52:00.587080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:04.280 Running I/O for 5 seconds...
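Stripped of the harness, the verify pass being launched here is a single bdevperf invocation, copied from the run_test line above. Judging by the result table that follows, -C together with the two-core mask gives every bdev a job on each core, which is why each device is listed twice (Core Mask 0x1 and 0x2):

    cd /home/vagrant/spdk_repo/spdk
    # -q 128: queue depth per job; -o 4096: 4 KiB I/Os; -w verify -t 5: 5-second read-back-verify
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3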
00:31:06.597 17024.00 IOPS, 66.50 MiB/s
[2024-11-20T13:52:04.851Z] 17632.00 IOPS, 68.88 MiB/s
[2024-11-20T13:52:05.785Z] 17365.33 IOPS, 67.83 MiB/s
[2024-11-20T13:52:06.721Z] 17216.00 IOPS, 67.25 MiB/s
00:31:09.398 Latency(us)
00:31:09.398 [2024-11-20T13:52:06.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:09.398 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0xbd0bd
00:31:09.398 Nvme0n1 : 5.06 1367.02 5.34 0.00 0.00 93421.53 16976.94 89378.62
00:31:09.398 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:31:09.398 Nvme0n1 : 5.07 1414.79 5.53 0.00 0.00 89667.33 20846.69 77394.90
00:31:09.398 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0xa0000
00:31:09.398 Nvme1n1 : 5.06 1366.42 5.34 0.00 0.00 93313.42 19972.88 81888.79
00:31:09.398 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0xa0000 length 0xa0000
00:31:09.398 Nvme1n1 : 5.08 1424.03 5.56 0.00 0.00 88961.23 4962.01 74398.96
00:31:09.398 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0x80000
00:31:09.398 Nvme2n1 : 5.06 1365.70 5.33 0.00 0.00 93158.82 19848.05 78393.54
00:31:09.398 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x80000 length 0x80000
00:31:09.398 Nvme2n1 : 5.08 1423.62 5.56 0.00 0.00 88827.67 5336.50 80390.83
00:31:09.398 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0x80000
00:31:09.398 Nvme2n2 : 5.07 1364.34 5.33 0.00 0.00 93046.37 22968.81 82887.44
00:31:09.398 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x80000 length 0x80000
00:31:09.398 Nvme2n2 : 5.08 1423.24 5.56 0.00 0.00 88757.48 5492.54 83386.76
00:31:09.398 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0x80000
00:31:09.398 Nvme2n3 : 5.07 1363.14 5.32 0.00 0.00 92935.77 20721.86 86882.01
00:31:09.398 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x80000 length 0x80000
00:31:09.398 Nvme2n3 : 5.06 1416.31 5.53 0.00 0.00 90095.87 16352.79 84385.40
00:31:09.398 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x0 length 0x20000
00:31:09.398 Nvme3n1 : 5.07 1362.57 5.32 0.00 0.00 92804.85 13793.77 88879.30
00:31:09.398 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:09.398 Verification LBA range: start 0x20000 length 0x20000
00:31:09.398 Nvme3n1 : 5.06 1415.39 5.53 0.00 0.00 89836.52 19099.06 80390.83
00:31:09.398 [2024-11-20T13:52:06.721Z] ===================================================================================================================
00:31:09.398 [2024-11-20T13:52:06.721Z] Total : 16706.56 65.26 0.00 0.00 91196.09 4962.01 89378.62
00:31:11.302 ************************************
00:31:11.302 END TEST bdev_verify
00:31:11.302 ************************************
00:31:11.302
00:31:11.302 real 0m8.127s
00:31:11.302 user 0m14.955s
sys 0m0.336s
00:31:11.302 13:52:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:11.302 13:52:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:31:11.302 13:52:08 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:31:11.302 13:52:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:31:11.302 13:52:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:11.302 13:52:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:31:11.302 ************************************
00:31:11.302 START TEST bdev_verify_big_io
00:31:11.302 ************************************
00:31:11.302 13:52:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:31:11.561 [2024-11-20 13:52:08.429916] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:31:11.561 [2024-11-20 13:52:08.430104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62041 ]
00:31:11.561 [2024-11-20 13:52:08.628628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:11.561 [2024-11-20 13:52:08.765841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:11.561 [2024-11-20 13:52:08.765902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:12.497 Running I/O for 5 seconds...
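A quick way to read the tables these bdevperf runs print: MiB/s = IOPS × io_size / 2^20. With -o 65536 that is IOPS/16, so the first Nvme0n1 job below reports 131.32 IOPS ≈ 8.21 MiB/s (131.32 × 65536 / 1048576); the 4 KiB verify run above divides by 256 instead (17024.00 / 256 = 66.50).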
00:31:18.354 1864.00 IOPS, 116.50 MiB/s
[2024-11-20T13:52:15.677Z] 2859.50 IOPS, 178.72 MiB/s
00:31:18.354 Latency(us)
00:31:18.354 [2024-11-20T13:52:15.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:18.354 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.354 Verification LBA range: start 0x0 length 0xbd0b
00:31:18.354 Nvme0n1 : 5.85 131.32 8.21 0.00 0.00 950803.99 29459.99 926741.46
00:31:18.354 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.354 Verification LBA range: start 0xbd0b length 0xbd0b
00:31:18.354 Nvme0n1 : 5.87 130.87 8.18 0.00 0.00 857300.85 16352.79 854839.10
00:31:18.354 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.354 Verification LBA range: start 0x0 length 0xa000
00:31:18.354 Nvme1n1 : 5.85 131.24 8.20 0.00 0.00 924464.60 64911.85 882801.13
00:31:18.354 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0xa000 length 0xa000
00:31:18.355 Nvme1n1 : 5.88 135.12 8.45 0.00 0.00 808779.37 3760.52 882801.13
00:31:18.355 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x0 length 0x8000
00:31:18.355 Nvme2n1 : 5.85 131.17 8.20 0.00 0.00 898849.00 65411.17 918752.30
00:31:18.355 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x8000 length 0x8000
00:31:18.355 Nvme2n1 : 5.85 127.10 7.94 0.00 0.00 979777.63 21720.50 1078535.31
00:31:18.355 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x0 length 0x8000
00:31:18.355 Nvme2n2 : 5.86 131.02 8.19 0.00 0.00 876476.38 67408.46 958698.06
00:31:18.355 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x8000 length 0x8000
00:31:18.355 Nvme2n2 : 5.85 126.91 7.93 0.00 0.00 953213.75 51430.16 1038589.56
00:31:18.355 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x0 length 0x8000
00:31:18.355 Nvme2n3 : 5.87 135.33 8.46 0.00 0.00 831386.77 4868.39 1198372.57
00:31:18.355 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x8000 length 0x8000
00:31:18.355 Nvme2n3 : 5.86 126.46 7.90 0.00 0.00 930759.78 50930.83 810898.77
00:31:18.355 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x0 length 0x2000
00:31:18.355 Nvme3n1 : 5.88 141.44 8.84 0.00 0.00 776090.52 4493.90 962692.63
00:31:18.355 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:18.355 Verification LBA range: start 0x2000 length 0x2000
00:31:18.355 Nvme3n1 : 5.86 131.02 8.19 0.00 0.00 880998.07 11671.65 974676.36
00:31:18.355 [2024-11-20T13:52:15.678Z] ===================================================================================================================
00:31:18.355 [2024-11-20T13:52:15.678Z] Total : 1578.99 98.69 0.00 0.00 887370.18 3760.52 1198372.57
00:31:20.888
00:31:20.888 real 0m9.591s
00:31:20.888 user 0m17.834s
00:31:20.888 sys 0m0.385s 13:52:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:20.888 ************************************
00:31:20.888 13:52:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:31:20.888 END TEST bdev_verify_big_io
************************************
00:31:20.888 13:52:17 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:20.888 13:52:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:31:20.888 13:52:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:20.888 13:52:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:31:20.888 ************************************
00:31:20.888 START TEST bdev_write_zeroes
00:31:20.888 ************************************
00:31:20.888 13:52:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:21.145 [2024-11-20 13:52:18.071955] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:31:21.145 [2024-11-20 13:52:18.072134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62162 ]
00:31:21.146 [2024-11-20 13:52:18.266052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:21.146 [2024-11-20 13:52:18.382114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:22.084 Running I/O for 1 seconds...
00:31:23.028 48704.00 IOPS, 190.25 MiB/s
00:31:23.028 Latency(us)
00:31:23.028 [2024-11-20T13:52:20.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:23.028 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme0n1 : 1.03 8111.30 31.68 0.00 0.00 15744.19 12607.88 25340.59
00:31:23.028 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme1n1 : 1.03 8101.88 31.65 0.00 0.00 15740.68 12795.12 26838.55
00:31:23.028 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme2n1 : 1.03 8092.47 31.61 0.00 0.00 15649.81 10548.18 21970.16
00:31:23.028 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme2n2 : 1.03 8083.43 31.58 0.00 0.00 15604.36 8301.23 21221.18
00:31:23.028 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme2n3 : 1.03 8073.84 31.54 0.00 0.00 15588.62 7521.04 22344.66
00:31:23.028 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:23.028 Nvme3n1 : 1.03 8002.40 31.26 0.00 0.00 15685.93 9362.29 24591.60
00:31:23.028 [2024-11-20T13:52:20.351Z] ===================================================================================================================
00:31:23.028 [2024-11-20T13:52:20.351Z] Total : 48465.32 189.32 0.00 0.00 15668.91 7521.04 26838.55
00:31:24.405
00:31:24.405 real 0m3.469s
00:31:24.405 user 0m3.042s
00:31:24.405 sys 0m0.305s 13:52:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:24.405 13:52:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:31:24.405 ************************************
END TEST bdev_write_zeroes
00:31:24.405 ************************************ 00:31:24.405 13:52:21 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:24.405 13:52:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:31:24.405 13:52:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.405 13:52:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:31:24.405 ************************************ 00:31:24.405 START TEST bdev_json_nonenclosed 00:31:24.405 ************************************ 00:31:24.405 13:52:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:24.405 [2024-11-20 13:52:21.588299] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:24.405 [2024-11-20 13:52:21.588452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:31:24.663 [2024-11-20 13:52:21.779109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.663 [2024-11-20 13:52:21.953319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.663 [2024-11-20 13:52:21.953450] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:31:24.663 [2024-11-20 13:52:21.953499] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:24.663 [2024-11-20 13:52:21.953520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:25.227 00:31:25.227 real 0m0.841s 00:31:25.227 user 0m0.588s 00:31:25.227 sys 0m0.146s 00:31:25.227 13:52:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.227 13:52:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:25.227 ************************************ 00:31:25.227 END TEST bdev_json_nonenclosed 00:31:25.228 ************************************ 00:31:25.228 13:52:22 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:25.228 13:52:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:31:25.228 13:52:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.228 13:52:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:31:25.228 ************************************ 00:31:25.228 START TEST bdev_json_nonarray 00:31:25.228 ************************************ 00:31:25.228 13:52:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:25.228 [2024-11-20 13:52:22.462852] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:25.228 [2024-11-20 13:52:22.463046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:31:25.485 [2024-11-20 13:52:22.650254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.485 [2024-11-20 13:52:22.786780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.485 [2024-11-20 13:52:22.786890] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:25.485 [2024-11-20 13:52:22.786915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:25.485 [2024-11-20 13:52:22.786929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:26.051 00:31:26.051 real 0m0.726s 00:31:26.051 user 0m0.478s 00:31:26.051 sys 0m0.142s 00:31:26.051 13:52:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.051 ************************************ 00:31:26.051 END TEST bdev_json_nonarray 00:31:26.051 13:52:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:26.051 ************************************ 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:31:26.051 13:52:23 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:31:26.051 ************************************ 00:31:26.051 END TEST blockdev_nvme 00:31:26.051 ************************************ 00:31:26.051 00:31:26.051 real 0m48.740s 00:31:26.051 user 1m12.459s 00:31:26.051 sys 0m8.808s 00:31:26.051 13:52:23 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.051 13:52:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:31:26.051 13:52:23 -- spdk/autotest.sh@209 -- # uname -s 00:31:26.051 13:52:23 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:31:26.051 13:52:23 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:26.051 13:52:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:26.051 13:52:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.051 13:52:23 -- common/autotest_common.sh@10 -- # set +x 00:31:26.051 ************************************ 00:31:26.051 START TEST blockdev_nvme_gpt 00:31:26.051 ************************************ 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:26.051 * Looking for test storage... 
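The 'lt 1.15 2' trace that follows is scripts/common.sh comparing the installed lcov version against 2 to pick coverage flags; a simplified stand-in for that comparison (a sketch, not the real cmp_versions implementation):

    # Simplified version comparison (sketch only; scripts/common.sh does this digit-by-digit)
    lt() {
        # true when $1 sorts strictly before $2 under version ordering
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 option style"

As the trace shows, this run's lcov is pre-2.0, so the harness exports the lcov_branch_coverage/lcov_function_coverage flag spelling.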
00:31:26.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.051 13:52:23 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.051 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:26.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.051 --rc genhtml_branch_coverage=1 00:31:26.051 --rc genhtml_function_coverage=1 00:31:26.052 --rc genhtml_legend=1 00:31:26.052 --rc geninfo_all_blocks=1 00:31:26.052 --rc geninfo_unexecuted_blocks=1 00:31:26.052 00:31:26.052 ' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:26.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.052 --rc 
genhtml_branch_coverage=1 00:31:26.052 --rc genhtml_function_coverage=1 00:31:26.052 --rc genhtml_legend=1 00:31:26.052 --rc geninfo_all_blocks=1 00:31:26.052 --rc geninfo_unexecuted_blocks=1 00:31:26.052 00:31:26.052 ' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:26.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.052 --rc genhtml_branch_coverage=1 00:31:26.052 --rc genhtml_function_coverage=1 00:31:26.052 --rc genhtml_legend=1 00:31:26.052 --rc geninfo_all_blocks=1 00:31:26.052 --rc geninfo_unexecuted_blocks=1 00:31:26.052 00:31:26.052 ' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:26.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.052 --rc genhtml_branch_coverage=1 00:31:26.052 --rc genhtml_function_coverage=1 00:31:26.052 --rc genhtml_legend=1 00:31:26.052 --rc geninfo_all_blocks=1 00:31:26.052 --rc geninfo_unexecuted_blocks=1 00:31:26.052 00:31:26.052 ' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62330 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62330 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62330 ']' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.052 13:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:26.309 [2024-11-20 13:52:23.494296] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:26.309 [2024-11-20 13:52:23.494570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62330 ] 00:31:26.567 [2024-11-20 13:52:23.699076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.567 [2024-11-20 13:52:23.885943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.940 13:52:24 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.940 13:52:24 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:31:27.940 13:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:31:27.940 13:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:31:27.940 13:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:28.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:28.197 Waiting for block devices as requested 00:31:28.455 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.456 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.456 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.770 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:34.051 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:34.051 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
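The get_zoned_devs loop whose iterations are traced above and below just reads each nvme block node's sysfs zoned attribute; a condensed sketch (not the real autotest_common.sh helper):

    # Condensed sketch of the zoned-device scan; every node reports 'none' on this run
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # 'none' means a conventional namespace; anything else is zoned
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs["${nvme##*/}"]=1
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"   # empty here, so no devices are excluded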
00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.051 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
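Next, traced below, setup_gpt_conf walks nvme_devs until parted reports a disk with no recognisable label, dedicates it to the GPT tests, and then stamps the SPDK partition-type GUIDs extracted from module/bdev/gpt/gpt.h onto it with sgdisk. A sketch of the blank-disk probe (device list is a subset of this run's nvme_devs):

    # Sketch of the label probe traced below; parted -ms gives machine-readable output
    gpt_nvme=
    for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
        pt=$(parted "$dev" -ms print 2>&1 || true)
        if [[ $pt == *"unrecognised disk label"* ]]; then
            gpt_nvme=$dev    # /dev/nvme0n1 wins on this run
            break
        fi
    done

Note that the kernel node found here, /dev/nvme0n1 at 5369 MB, matches the size of the controller SPDK attaches as Nvme1, which is why the two partitions later surface as bdevs Nvme1n1p1 and Nvme1n1p2.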
00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:31:34.052 13:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:31:34.052 BYT; 00:31:34.052 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:31:34.052 BYT; 00:31:34.052 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:34.052 13:52:31 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:34.052 13:52:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:31:35.081 The operation has completed successfully. 00:31:35.081 13:52:32 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:31:36.020 The operation has completed successfully. 00:31:36.020 13:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:36.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:37.156 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:37.156 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:31:37.156 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:37.156 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:31:37.415 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.415 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.415 [] 00:31:37.415 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:37.415 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:31:37.415 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.415 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.674 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.674 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:31:37.674 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:31:37.674 13:52:34 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.674 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.674 13:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.674 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.934 13:52:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.934 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:31:37.934 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:31:37.934 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:31:37.934 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.934 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:37.934 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.934 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:31:37.934 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:31:37.935 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1ceb3b9f-2218-4ed1-89cb-1c590bde3816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1ceb3b9f-2218-4ed1-89cb-1c590bde3816",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8597e515-79c1-47a4-91f9-4c0cf5eb726e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8597e515-79c1-47a4-91f9-4c0cf5eb726e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "34de8a94-3e4f-44f3-bce5-e126db498834"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "34de8a94-3e4f-44f3-bce5-e126db498834",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "229a38e4-7f5d-415b-84e3-0cd49016b5bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "229a38e4-7f5d-415b-84e3-0cd49016b5bd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "be1ecce7-957e-41e3-9a7a-0ff9eb419c63"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "be1ecce7-957e-41e3-9a7a-0ff9eb419c63",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:31:37.935 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:31:37.935 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:31:37.935 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:31:37.935 13:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62330 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62330 ']' 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62330 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62330 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:37.935 killing process with pid 62330 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62330' 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62330 00:31:37.935 13:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62330 00:31:40.486 13:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:40.486 13:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:31:40.486 13:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:40.486 13:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.486 13:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:40.486 ************************************ 00:31:40.486 START TEST bdev_hello_world 00:31:40.486 ************************************ 00:31:40.486 13:52:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:31:40.771 
[2024-11-20 13:52:37.907307] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:40.771 [2024-11-20 13:52:37.907507] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:31:41.030 [2024-11-20 13:52:38.099636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.030 [2024-11-20 13:52:38.222354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.597 [2024-11-20 13:52:38.887313] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:41.597 [2024-11-20 13:52:38.887380] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:31:41.597 [2024-11-20 13:52:38.887413] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:41.597 [2024-11-20 13:52:38.890719] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:41.597 [2024-11-20 13:52:38.891284] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:41.597 [2024-11-20 13:52:38.891324] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:41.597 [2024-11-20 13:52:38.891544] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:41.597 00:31:41.597 [2024-11-20 13:52:38.891591] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:42.974 00:31:42.974 real 0m2.337s 00:31:42.974 user 0m1.961s 00:31:42.974 sys 0m0.267s 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:42.974 ************************************ 00:31:42.974 END TEST bdev_hello_world 00:31:42.974 ************************************ 00:31:42.974 13:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:31:42.974 13:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:42.974 13:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.974 13:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:42.974 ************************************ 00:31:42.974 START TEST bdev_bounds 00:31:42.974 ************************************ 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63022 00:31:42.974 13:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:42.974 Process bdevio pid: 63022 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63022' 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63022 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63022 ']' 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.975 13:52:40 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:42.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:42.975 13:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:43.233 [2024-11-20 13:52:40.301201] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:43.233 [2024-11-20 13:52:40.301405] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63022 ] 00:31:43.233 [2024-11-20 13:52:40.508162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:43.492 [2024-11-20 13:52:40.684542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.492 [2024-11-20 13:52:40.684625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.492 [2024-11-20 13:52:40.684631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.429 13:52:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.429 13:52:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:31:44.429 13:52:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:44.429 I/O targets: 00:31:44.429 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:31:44.429 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:31:44.429 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:31:44.429 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:31:44.429 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:31:44.429 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:31:44.429 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:31:44.429 00:31:44.429 00:31:44.429 CUnit - A unit testing framework for C - Version 2.1-3 00:31:44.429 http://cunit.sourceforge.net/ 00:31:44.429 00:31:44.429 00:31:44.429 Suite: bdevio tests on: Nvme3n1 00:31:44.429 Test: blockdev write read block ...passed 00:31:44.429 Test: blockdev write zeroes read block ...passed 00:31:44.429 Test: blockdev write zeroes read no split ...passed 00:31:44.429 Test: blockdev write zeroes read split ...passed 00:31:44.429 Test: blockdev write zeroes read split partial ...passed 00:31:44.429 Test: blockdev reset ...[2024-11-20 13:52:41.662173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:31:44.429 [2024-11-20 13:52:41.666622] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:31:44.429 passed 00:31:44.429 Test: blockdev write read 8 blocks ...passed 00:31:44.429 Test: blockdev write read size > 128k ...passed 00:31:44.429 Test: blockdev write read invalid size ...passed 00:31:44.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.429 Test: blockdev write read max offset ...passed 00:31:44.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.429 Test: blockdev writev readv 8 blocks ...passed 00:31:44.429 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.429 Test: blockdev writev readv block ...passed 00:31:44.429 Test: blockdev writev readv size > 128k ...passed 00:31:44.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.429 Test: blockdev comparev and writev ...[2024-11-20 13:52:41.676205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3804000 len:0x1000 00:31:44.429 [2024-11-20 13:52:41.676289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:44.429 passed 00:31:44.429 Test: blockdev nvme passthru rw ...passed 00:31:44.429 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.430 Test: blockdev nvme admin passthru ...[2024-11-20 13:52:41.677015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:31:44.430 [2024-11-20 13:52:41.677059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:31:44.430 passed 00:31:44.430 Test: blockdev copy ...passed 00:31:44.430 Suite: bdevio tests on: Nvme2n3 00:31:44.430 Test: blockdev write read block ...passed 00:31:44.430 Test: blockdev write zeroes read block ...passed 00:31:44.430 Test: blockdev write zeroes read no split ...passed 00:31:44.430 Test: blockdev write zeroes read split ...passed 00:31:44.689 Test: blockdev write zeroes read split partial ...passed 00:31:44.689 Test: blockdev reset ...[2024-11-20 13:52:41.790352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:31:44.689 [2024-11-20 13:52:41.795217] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:31:44.689 passed 00:31:44.689 Test: blockdev write read 8 blocks ...passed 00:31:44.689 Test: blockdev write read size > 128k ...passed 00:31:44.689 Test: blockdev write read invalid size ...passed 00:31:44.689 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.689 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.689 Test: blockdev write read max offset ...passed 00:31:44.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.689 Test: blockdev writev readv 8 blocks ...passed 00:31:44.689 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.689 Test: blockdev writev readv block ...passed 00:31:44.689 Test: blockdev writev readv size > 128k ...passed 00:31:44.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.689 Test: blockdev comparev and writev ...[2024-11-20 13:52:41.804704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3802000 len:0x1000 00:31:44.689 [2024-11-20 13:52:41.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:44.690 passed 00:31:44.690 Test: blockdev nvme passthru rw ...passed 00:31:44.690 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.690 Test: blockdev nvme admin passthru ...[2024-11-20 13:52:41.805542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:31:44.690 [2024-11-20 13:52:41.805601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:31:44.690 passed 00:31:44.690 Test: blockdev copy ...passed 00:31:44.690 Suite: bdevio tests on: Nvme2n2 00:31:44.690 Test: blockdev write read block ...passed 00:31:44.690 Test: blockdev write zeroes read block ...passed 00:31:44.690 Test: blockdev write zeroes read no split ...passed 00:31:44.690 Test: blockdev write zeroes read split ...passed 00:31:44.690 Test: blockdev write zeroes read split partial ...passed 00:31:44.690 Test: blockdev reset ...[2024-11-20 13:52:41.920289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:31:44.690 [2024-11-20 13:52:41.925171] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:31:44.690 passed 00:31:44.690 Test: blockdev write read 8 blocks ...passed 00:31:44.690 Test: blockdev write read size > 128k ...passed 00:31:44.690 Test: blockdev write read invalid size ...passed 00:31:44.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.690 Test: blockdev write read max offset ...passed 00:31:44.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.690 Test: blockdev writev readv 8 blocks ...passed 00:31:44.690 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.690 Test: blockdev writev readv block ...passed 00:31:44.690 Test: blockdev writev readv size > 128k ...passed 00:31:44.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.690 Test: blockdev comparev and writev ...[2024-11-20 13:52:41.934520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e38000 len:0x1000 00:31:44.690 [2024-11-20 13:52:41.934572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:44.690 passed 00:31:44.690 Test: blockdev nvme passthru rw ...passed 00:31:44.690 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.690 Test: blockdev nvme admin passthru ...[2024-11-20 13:52:41.935429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:31:44.690 [2024-11-20 13:52:41.935463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:31:44.690 passed 00:31:44.690 Test: blockdev copy ...passed 00:31:44.690 Suite: bdevio tests on: Nvme2n1 00:31:44.690 Test: blockdev write read block ...passed 00:31:44.690 Test: blockdev write zeroes read block ...passed 00:31:44.690 Test: blockdev write zeroes read no split ...passed 00:31:44.690 Test: blockdev write zeroes read split ...passed 00:31:44.949 Test: blockdev write zeroes read split partial ...passed 00:31:44.949 Test: blockdev reset ...[2024-11-20 13:52:42.042435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:31:44.949 [2024-11-20 13:52:42.046926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:31:44.949 passed 00:31:44.949 Test: blockdev write read 8 blocks ...passed 00:31:44.949 Test: blockdev write read size > 128k ...passed 00:31:44.950 Test: blockdev write read invalid size ...passed 00:31:44.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.950 Test: blockdev write read max offset ...passed 00:31:44.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.950 Test: blockdev writev readv 8 blocks ...passed 00:31:44.950 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.950 Test: blockdev writev readv block ...passed 00:31:44.950 Test: blockdev writev readv size > 128k ...passed 00:31:44.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.950 Test: blockdev comparev and writev ...[2024-11-20 13:52:42.056202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e34000 len:0x1000 00:31:44.950 [2024-11-20 13:52:42.056290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:44.950 passed 00:31:44.950 Test: blockdev nvme passthru rw ...passed 00:31:44.950 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.950 Test: blockdev nvme admin passthru ...[2024-11-20 13:52:42.057136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:31:44.950 [2024-11-20 13:52:42.057176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:31:44.950 passed 00:31:44.950 Test: blockdev copy ...passed 00:31:44.950 Suite: bdevio tests on: Nvme1n1p2 00:31:44.950 Test: blockdev write read block ...passed 00:31:44.950 Test: blockdev write zeroes read block ...passed 00:31:44.950 Test: blockdev write zeroes read no split ...passed 00:31:44.950 Test: blockdev write zeroes read split ...passed 00:31:44.950 Test: blockdev write zeroes read split partial ...passed 00:31:44.950 Test: blockdev reset ...[2024-11-20 13:52:42.163154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:31:44.950 [2024-11-20 13:52:42.167627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:31:44.950 passed 00:31:44.950 Test: blockdev write read 8 blocks ...passed 00:31:44.950 Test: blockdev write read size > 128k ...passed 00:31:44.950 Test: blockdev write read invalid size ...passed 00:31:44.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.950 Test: blockdev write read max offset ...passed 00:31:44.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.950 Test: blockdev writev readv 8 blocks ...passed 00:31:44.950 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.950 Test: blockdev writev readv block ...passed 00:31:44.950 Test: blockdev writev readv size > 128k ...passed 00:31:44.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.950 Test: blockdev comparev and writev ...[2024-11-20 13:52:42.179390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7e30000 len:0x1000 00:31:44.950 [2024-11-20 13:52:42.179445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:44.950 passed 00:31:44.950 Test: blockdev nvme passthru rw ...passed 00:31:44.950 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.950 Test: blockdev nvme admin passthru ...passed 00:31:44.950 Test: blockdev copy ...passed 00:31:44.950 Suite: bdevio tests on: Nvme1n1p1 00:31:44.950 Test: blockdev write read block ...passed 00:31:44.950 Test: blockdev write zeroes read block ...passed 00:31:44.950 Test: blockdev write zeroes read no split ...passed 00:31:44.950 Test: blockdev write zeroes read split ...passed 00:31:45.275 Test: blockdev write zeroes read split partial ...passed 00:31:45.275 Test: blockdev reset ...[2024-11-20 13:52:42.280028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:31:45.275 [2024-11-20 13:52:42.284247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
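Each bdevio suite above follows the same template per bdev (reset, write/read, writev/readv, comparev, passthru, copy). To rerun the whole pass outside the harness, the two commands traced just before the I/O targets listing are enough; a sketch using this run's paths:

    # Sketch: manual rerun of the bdevio suites (same flags the harness traced above)
    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait for an RPC before running
    ./test/bdev/bdevio/tests.py perform_tests                        # fires the CUnit suites per bdev
    wait                                                             # bdevio exits once the tests finish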
00:31:45.275 passed 00:31:45.275 Test: blockdev write read 8 blocks ...passed 00:31:45.275 Test: blockdev write read size > 128k ...passed 00:31:45.275 Test: blockdev write read invalid size ...passed 00:31:45.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:45.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:45.275 Test: blockdev write read max offset ...passed 00:31:45.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:45.276 Test: blockdev writev readv 8 blocks ...passed 00:31:45.276 Test: blockdev writev readv 30 x 1block ...passed 00:31:45.276 Test: blockdev writev readv block ...passed 00:31:45.276 Test: blockdev writev readv size > 128k ...passed 00:31:45.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:45.276 Test: blockdev comparev and writev ...[2024-11-20 13:52:42.293667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b420e000 len:0x1000 00:31:45.276 [2024-11-20 13:52:42.293722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:45.276 passed 00:31:45.276 Test: blockdev nvme passthru rw ...passed 00:31:45.276 Test: blockdev nvme passthru vendor specific ...passed 00:31:45.276 Test: blockdev nvme admin passthru ...passed 00:31:45.276 Test: blockdev copy ...passed 00:31:45.276 Suite: bdevio tests on: Nvme0n1 00:31:45.276 Test: blockdev write read block ...passed 00:31:45.276 Test: blockdev write zeroes read block ...passed 00:31:45.276 Test: blockdev write zeroes read no split ...passed 00:31:45.276 Test: blockdev write zeroes read split ...passed 00:31:45.276 Test: blockdev write zeroes read split partial ...passed 00:31:45.276 Test: blockdev reset ...[2024-11-20 13:52:42.392934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:31:45.276 [2024-11-20 13:52:42.397264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:31:45.276 passed 00:31:45.276 Test: blockdev write read 8 blocks ...passed 00:31:45.276 Test: blockdev write read size > 128k ...passed 00:31:45.276 Test: blockdev write read invalid size ...passed 00:31:45.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:45.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:45.276 Test: blockdev write read max offset ...passed 00:31:45.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:45.276 Test: blockdev writev readv 8 blocks ...passed 00:31:45.276 Test: blockdev writev readv 30 x 1block ...passed 00:31:45.276 Test: blockdev writev readv block ...passed 00:31:45.276 Test: blockdev writev readv size > 128k ...passed 00:31:45.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:45.276 Test: blockdev comparev and writev ...[2024-11-20 13:52:42.404820] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:31:45.276 separate metadata which is not supported yet. 
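Note: Nvme0n1 takes a different branch here because it is formatted with a separate (non-interleaved) metadata buffer, so bdevio skips comparev_and_writev instead of running it; the ERROR tag is only the log level of the skip message, and the suite still counts the test as passed. Whether a bdev carries separate metadata is visible in the same JSON the tests consume; a minimal sketch, with field names taken from typical bdev_get_bdevs output:

    # Hedged sketch: md_size > 0 together with md_interleave == false marks a
    # bdev whose metadata sits in a separate buffer (the case skipped above).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | \
        jq '.[0] | {name, block_size, md_size, md_interleave, dif_type}'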
00:31:45.276 passed 00:31:45.276 Test: blockdev nvme passthru rw ...passed 00:31:45.276 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:52:42.405414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:31:45.276 [2024-11-20 13:52:42.405462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:31:45.276 passed 00:31:45.276 Test: blockdev nvme admin passthru ...passed 00:31:45.276 Test: blockdev copy ...passed 00:31:45.276 00:31:45.276 Run Summary: Type Total Ran Passed Failed Inactive 00:31:45.276 suites 7 7 n/a 0 0 00:31:45.276 tests 161 161 161 0 0 00:31:45.276 asserts 1025 1025 1025 0 n/a 00:31:45.276 00:31:45.276 Elapsed time = 2.412 seconds 00:31:45.276 0 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63022 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63022 ']' 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63022 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63022 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.276 killing process with pid 63022 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63022' 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63022 00:31:45.276 13:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63022 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:46.653 00:31:46.653 real 0m3.423s 00:31:46.653 user 0m8.782s 00:31:46.653 sys 0m0.451s 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.653 ************************************ 00:31:46.653 END TEST bdev_bounds 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:46.653 ************************************ 00:31:46.653 13:52:43 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:31:46.653 13:52:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:46.653 13:52:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.653 13:52:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:46.653 ************************************ 00:31:46.653 START TEST bdev_nbd 00:31:46.653 ************************************ 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:46.653 13:52:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63092 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63092 /var/tmp/spdk-nbd.sock 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63092 ']' 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:46.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:46.653 13:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:46.653 [2024-11-20 13:52:43.795011] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
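Note: bdev_nbd hands everything to a dedicated bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock) rather than the default one, and the [[ -e /sys/module/nbd ]] guard above means the kernel nbd module must already be loaded. The standalone equivalent of the setup being traced, sketched with the same paths; the backgrounding and sleep are a crude stand-in for waitforlisten's readiness poll:

    # Hedged sketch: start bdev_svc on the nbd RPC socket and export a bdev
    # as a kernel block device, mirroring the traced commands.
    sudo modprobe nbd    # only needed if the module is not already loaded
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    sleep 1              # assumed delay; the harness polls the socket instead
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1 /dev/nbd0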
00:31:46.653 [2024-11-20 13:52:43.795189] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.911 [2024-11-20 13:52:43.991749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.911 [2024-11-20 13:52:44.119335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:47.848 13:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.848 1+0 records in 00:31:47.848 1+0 records out 00:31:47.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440098 s, 9.3 MB/s 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:47.848 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.107 1+0 records in 00:31:48.107 1+0 records out 00:31:48.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637143 s, 6.4 MB/s 00:31:48.107 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.366 1+0 records in 00:31:48.366 1+0 records out 00:31:48.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551396 s, 7.4 MB/s 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:48.366 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:31:48.625 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:31:48.625 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:48.884 13:52:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.884 1+0 records in 00:31:48.884 1+0 records out 00:31:48.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694244 s, 5.9 MB/s 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:48.885 13:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.144 1+0 records in 00:31:49.144 1+0 records out 00:31:49.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581693 s, 7.0 MB/s 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:49.144 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:49.402 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.403 1+0 records in 00:31:49.403 1+0 records out 00:31:49.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663843 s, 6.2 MB/s 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:49.403 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.661 1+0 records in 00:31:49.661 1+0 records out 00:31:49.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886985 s, 4.6 MB/s 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:31:49.661 13:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd0", 00:31:49.937 "bdev_name": "Nvme0n1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd1", 00:31:49.937 "bdev_name": "Nvme1n1p1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd2", 00:31:49.937 "bdev_name": "Nvme1n1p2" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd3", 00:31:49.937 "bdev_name": "Nvme2n1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd4", 00:31:49.937 "bdev_name": "Nvme2n2" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd5", 00:31:49.937 "bdev_name": "Nvme2n3" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd6", 00:31:49.937 "bdev_name": "Nvme3n1" 00:31:49.937 } 00:31:49.937 ]' 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd0", 00:31:49.937 "bdev_name": "Nvme0n1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd1", 00:31:49.937 "bdev_name": "Nvme1n1p1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd2", 00:31:49.937 "bdev_name": "Nvme1n1p2" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd3", 00:31:49.937 "bdev_name": "Nvme2n1" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd4", 00:31:49.937 "bdev_name": "Nvme2n2" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd5", 00:31:49.937 "bdev_name": "Nvme2n3" 00:31:49.937 }, 00:31:49.937 { 00:31:49.937 "nbd_device": "/dev/nbd6", 00:31:49.937 "bdev_name": "Nvme3n1" 00:31:49.937 } 00:31:49.937 ]' 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:49.937 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.195 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.454 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.713 13:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.713 13:52:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.971 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.229 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.487 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:31:51.743 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:51.743 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.743 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.743 13:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:52.000 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:52.258 
13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:52.258 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:31:52.517 /dev/nbd0 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.517 1+0 records in 00:31:52.517 1+0 records out 00:31:52.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519236 s, 7.9 MB/s 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:52.517 13:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:31:52.778 /dev/nbd1 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:52.778 13:52:50 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.778 1+0 records in 00:31:52.778 1+0 records out 00:31:52.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552478 s, 7.4 MB/s 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:52.778 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:31:53.044 /dev/nbd10 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.044 1+0 records in 00:31:53.044 1+0 records out 00:31:53.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520577 s, 7.9 MB/s 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:53.044 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:31:53.612 /dev/nbd11 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.612 1+0 records in 00:31:53.612 1+0 records out 00:31:53.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544427 s, 7.5 MB/s 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:53.612 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:31:53.871 /dev/nbd12 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:53.871 13:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
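Note: every nbd_start_disk in this stretch is followed by the same waitfornbd pattern from common/autotest_common.sh: poll /proc/partitions for up to 20 iterations until the node appears, then prove the device actually services I/O with one O_DIRECT 4 KiB read and check that dd produced a non-empty file. A condensed sketch of that flow as the xtrace shows it (the sleep interval is assumed; the trace does not reveal the helper's delay):

    # Hedged re-creation of the waitfornbd flow seen in the xtrace above.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed backoff between /proc/partitions polls
        done
        # one direct read proves the nbd queue is wired up end to end
        dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]    # mirrors the '[' 4096 '!=' 0 ']' check in the trace
    }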
00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.871 1+0 records in 00:31:53.871 1+0 records out 00:31:53.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745569 s, 5.5 MB/s 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:53.871 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:31:54.130 /dev/nbd13 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:54.130 1+0 records in 00:31:54.130 1+0 records out 00:31:54.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814943 s, 5.0 MB/s 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:54.130 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:31:54.389 /dev/nbd14 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:54.389 1+0 records in 00:31:54.389 1+0 records out 00:31:54.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00311973 s, 1.3 MB/s 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.389 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd0", 00:31:54.649 "bdev_name": "Nvme0n1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd1", 00:31:54.649 "bdev_name": "Nvme1n1p1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd10", 00:31:54.649 "bdev_name": "Nvme1n1p2" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd11", 00:31:54.649 "bdev_name": "Nvme2n1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd12", 00:31:54.649 "bdev_name": "Nvme2n2" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd13", 00:31:54.649 "bdev_name": "Nvme2n3" 
00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd14", 00:31:54.649 "bdev_name": "Nvme3n1" 00:31:54.649 } 00:31:54.649 ]' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd0", 00:31:54.649 "bdev_name": "Nvme0n1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd1", 00:31:54.649 "bdev_name": "Nvme1n1p1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd10", 00:31:54.649 "bdev_name": "Nvme1n1p2" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd11", 00:31:54.649 "bdev_name": "Nvme2n1" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd12", 00:31:54.649 "bdev_name": "Nvme2n2" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd13", 00:31:54.649 "bdev_name": "Nvme2n3" 00:31:54.649 }, 00:31:54.649 { 00:31:54.649 "nbd_device": "/dev/nbd14", 00:31:54.649 "bdev_name": "Nvme3n1" 00:31:54.649 } 00:31:54.649 ]' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:54.649 /dev/nbd1 00:31:54.649 /dev/nbd10 00:31:54.649 /dev/nbd11 00:31:54.649 /dev/nbd12 00:31:54.649 /dev/nbd13 00:31:54.649 /dev/nbd14' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:54.649 /dev/nbd1 00:31:54.649 /dev/nbd10 00:31:54.649 /dev/nbd11 00:31:54.649 /dev/nbd12 00:31:54.649 /dev/nbd13 00:31:54.649 /dev/nbd14' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:54.649 256+0 records in 00:31:54.649 256+0 records out 00:31:54.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007975 s, 131 MB/s 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:54.649 13:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:54.908 256+0 records in 00:31:54.908 256+0 records out 00:31:54.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.141177 s, 7.4 MB/s 00:31:54.908 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:54.908 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:54.908 256+0 records in 00:31:54.908 256+0 records out 00:31:54.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145654 s, 7.2 MB/s 00:31:54.908 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:54.908 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:31:55.168 256+0 records in 00:31:55.168 256+0 records out 00:31:55.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155672 s, 6.7 MB/s 00:31:55.168 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.168 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:31:55.168 256+0 records in 00:31:55.168 256+0 records out 00:31:55.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147793 s, 7.1 MB/s 00:31:55.168 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.168 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:31:55.427 256+0 records in 00:31:55.427 256+0 records out 00:31:55.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149633 s, 7.0 MB/s 00:31:55.427 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.427 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:31:55.685 256+0 records in 00:31:55.685 256+0 records out 00:31:55.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143297 s, 7.3 MB/s 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:31:55.685 256+0 records in 00:31:55.685 256+0 records out 00:31:55.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14712 s, 7.1 MB/s 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:31:55.685 13:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.685 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.945 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.205 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.464 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.723 13:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:56.981 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.240 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.499 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:57.758 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:57.758 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:57.758 13:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:31:57.758 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:58.016 malloc_lvol_verify 00:31:58.016 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:58.275 0503a410-113b-4b1b-b6fd-5e2bbcdbc22f 00:31:58.275 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:58.534 03a89db6-83f1-4916-9ca3-5979293aff16 00:31:58.534 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:58.793 /dev/nbd0 00:31:58.793 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:31:58.793 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:31:58.793 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:31:58.793 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:31:58.793 13:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:31:58.793 mke2fs 1.47.0 (5-Feb-2023) 00:31:58.793 Discarding device blocks: 0/4096 done 00:31:58.793 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:58.793 00:31:58.793 Allocating group tables: 0/1 done 00:31:58.793 Writing inode tables: 0/1 done 00:31:58.793 Creating journal (1024 blocks): done 00:31:58.793 Writing superblocks and filesystem accounting information: 0/1 done 00:31:58.793 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:31:58.793 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:59.051 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63092 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63092 ']' 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63092 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63092 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63092' 00:31:59.052 killing process with pid 63092 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63092 00:31:59.052 13:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63092 00:32:00.430 13:52:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:00.430 00:32:00.430 real 0m13.886s 00:32:00.430 user 0m18.403s 00:32:00.430 sys 0m5.699s 00:32:00.430 13:52:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.430 13:52:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:00.430 ************************************ 00:32:00.430 END TEST bdev_nbd 00:32:00.430 ************************************ 00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:32:00.430 skipping fio tests on NVMe due to multi-ns failures. 00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:00.430 13:52:57 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:00.430 13:52:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:32:00.430 13:52:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.430 13:52:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:00.430 ************************************ 00:32:00.430 START TEST bdev_verify 00:32:00.430 ************************************ 00:32:00.430 13:52:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:00.430 [2024-11-20 13:52:57.709113] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:00.430 [2024-11-20 13:52:57.709257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63532 ] 00:32:00.689 [2024-11-20 13:52:57.884439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:00.948 [2024-11-20 13:52:58.007871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.948 [2024-11-20 13:52:58.007918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.517 Running I/O for 5 seconds... 
00:32:03.875 17472.00 IOPS, 68.25 MiB/s [2024-11-20T13:53:02.135Z] 17952.00 IOPS, 70.12 MiB/s [2024-11-20T13:53:03.073Z] 18688.00 IOPS, 73.00 MiB/s [2024-11-20T13:53:04.008Z] 18608.00 IOPS, 72.69 MiB/s [2024-11-20T13:53:04.008Z] 18291.20 IOPS, 71.45 MiB/s 00:32:06.685 Latency(us) 00:32:06.685 [2024-11-20T13:53:04.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.685 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x0 length 0xbd0bd 00:32:06.685 Nvme0n1 : 5.06 1290.88 5.04 0.00 0.00 98699.21 21845.33 88879.30 00:32:06.685 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:32:06.685 Nvme0n1 : 5.10 1280.66 5.00 0.00 0.00 99240.10 27587.54 83386.76 00:32:06.685 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x0 length 0x4ff80 00:32:06.685 Nvme1n1p1 : 5.06 1290.33 5.04 0.00 0.00 98565.56 24716.43 84884.72 00:32:06.685 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x4ff80 length 0x4ff80 00:32:06.685 Nvme1n1p1 : 5.10 1280.17 5.00 0.00 0.00 99128.97 28835.84 80890.15 00:32:06.685 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x0 length 0x4ff7f 00:32:06.685 Nvme1n1p2 : 5.09 1295.93 5.06 0.00 0.00 98048.92 10922.67 84385.40 00:32:06.685 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:32:06.685 Nvme1n1p2 : 5.10 1279.69 5.00 0.00 0.00 98996.95 24716.43 80390.83 00:32:06.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x0 length 0x80000 00:32:06.685 Nvme2n1 : 5.09 1295.38 5.06 0.00 0.00 97919.75 10423.34 81888.79 00:32:06.685 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x80000 length 0x80000 00:32:06.685 Nvme2n1 : 5.10 1279.14 5.00 0.00 0.00 98857.04 16976.94 83886.08 00:32:06.685 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.685 Verification LBA range: start 0x0 length 0x80000 00:32:06.685 Nvme2n2 : 5.09 1294.83 5.06 0.00 0.00 97778.94 10236.10 81888.79 00:32:06.686 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.686 Verification LBA range: start 0x80000 length 0x80000 00:32:06.686 Nvme2n2 : 5.11 1278.65 4.99 0.00 0.00 98729.18 13356.86 87381.33 00:32:06.686 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.686 Verification LBA range: start 0x0 length 0x80000 00:32:06.686 Nvme2n3 : 5.09 1294.18 5.06 0.00 0.00 97662.94 10735.42 85883.37 00:32:06.686 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.686 Verification LBA range: start 0x80000 length 0x80000 00:32:06.686 Nvme2n3 : 5.09 1282.06 5.01 0.00 0.00 99590.43 23218.47 87381.33 00:32:06.686 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:06.686 Verification LBA range: start 0x0 length 0x20000 00:32:06.686 Nvme3n1 : 5.11 1303.32 5.09 0.00 0.00 97061.90 8987.79 88879.30 00:32:06.686 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:06.686 Verification LBA range: start 0x20000 length 0x20000 00:32:06.686 
Nvme3n1 : 5.09 1281.35 5.01 0.00 0.00 99426.93 24716.43 85883.37 00:32:06.686 [2024-11-20T13:53:04.009Z] =================================================================================================================== 00:32:06.686 [2024-11-20T13:53:04.009Z] Total : 18026.58 70.42 0.00 0.00 98546.47 8987.79 88879.30 00:32:08.586 00:32:08.586 real 0m7.831s 00:32:08.586 user 0m14.474s 00:32:08.586 sys 0m0.313s 00:32:08.586 13:53:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.586 13:53:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:08.586 ************************************ 00:32:08.586 END TEST bdev_verify 00:32:08.586 ************************************ 00:32:08.586 13:53:05 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:08.586 13:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:32:08.586 13:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.586 13:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:08.586 ************************************ 00:32:08.586 START TEST bdev_verify_big_io 00:32:08.586 ************************************ 00:32:08.586 13:53:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:08.586 [2024-11-20 13:53:05.595038] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:08.586 [2024-11-20 13:53:05.595212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63637 ] 00:32:08.586 [2024-11-20 13:53:05.776431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:08.586 [2024-11-20 13:53:05.898403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.586 [2024-11-20 13:53:05.898421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.522 Running I/O for 5 seconds... 
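The MiB/s column in these result tables is derived from IOPS and the configured I/O size (MiB/s = IOPS x io_size_bytes / 2^20). A quick cross-check against the 64 KiB big-I/O totals reported further below:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1973.57 * 65536 / 1048576 }'   # prints 123.35 MiB/s

Likewise, for the 4 KiB verify run just completed, 18026.58 * 4096 / 2^20 = 70.42 MiB/s, matching its Total row.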
00:32:14.204 2482.00 IOPS, 155.12 MiB/s [2024-11-20T13:53:12.465Z] 2951.50 IOPS, 184.47 MiB/s [2024-11-20T13:53:12.724Z] 2763.33 IOPS, 172.71 MiB/s [2024-11-20T13:53:12.724Z] 2855.25 IOPS, 178.45 MiB/s 00:32:15.402 Latency(us) 00:32:15.402 [2024-11-20T13:53:12.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.402 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0xbd0b 00:32:15.402 Nvme0n1 : 5.68 126.68 7.92 0.00 0.00 964562.58 27587.54 938725.18 00:32:15.402 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0xbd0b length 0xbd0b 00:32:15.402 Nvme0n1 : 5.72 134.53 8.41 0.00 0.00 908006.17 17725.93 942719.76 00:32:15.402 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x4ff8 00:32:15.402 Nvme1n1p1 : 5.75 125.16 7.82 0.00 0.00 963840.73 71902.35 1158426.82 00:32:15.402 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x4ff8 length 0x4ff8 00:32:15.402 Nvme1n1p1 : 5.76 138.27 8.64 0.00 0.00 871021.01 76396.25 806904.20 00:32:15.402 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x4ff7 00:32:15.402 Nvme1n1p2 : 5.69 135.02 8.44 0.00 0.00 879684.59 111348.78 962692.63 00:32:15.402 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x4ff7 length 0x4ff7 00:32:15.402 Nvme1n1p2 : 5.77 137.71 8.61 0.00 0.00 858221.16 91875.23 1006632.96 00:32:15.402 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x8000 00:32:15.402 Nvme2n1 : 5.76 138.05 8.63 0.00 0.00 842223.21 67907.78 974676.36 00:32:15.402 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x8000 length 0x8000 00:32:15.402 Nvme2n1 : 5.80 135.86 8.49 0.00 0.00 851638.34 44439.65 1541906.04 00:32:15.402 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x8000 00:32:15.402 Nvme2n2 : 5.81 143.62 8.98 0.00 0.00 792063.22 12732.71 854839.10 00:32:15.402 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x8000 length 0x8000 00:32:15.402 Nvme2n2 : 5.84 140.70 8.79 0.00 0.00 804856.43 32705.58 1557884.34 00:32:15.402 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x8000 00:32:15.402 Nvme2n3 : 5.81 148.73 9.30 0.00 0.00 750980.43 32705.58 882801.13 00:32:15.402 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x8000 length 0x8000 00:32:15.402 Nvme2n3 : 5.84 145.10 9.07 0.00 0.00 764435.90 30583.47 1589840.94 00:32:15.402 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x0 length 0x2000 00:32:15.402 Nvme3n1 : 5.86 163.70 10.23 0.00 0.00 666908.02 8613.30 910763.15 00:32:15.402 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:15.402 Verification LBA range: start 0x2000 length 0x2000 00:32:15.402 Nvme3n1 : 5.88 160.45 10.03 0.00 0.00 675552.19 
15978.30 1621797.55 00:32:15.402 [2024-11-20T13:53:12.725Z] =================================================================================================================== 00:32:15.402 [2024-11-20T13:53:12.725Z] Total : 1973.57 123.35 0.00 0.00 820789.76 8613.30 1621797.55 00:32:17.375 00:32:17.375 real 0m9.200s 00:32:17.375 user 0m17.152s 00:32:17.375 sys 0m0.359s 00:32:17.375 13:53:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.375 13:53:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:32:17.375 ************************************ 00:32:17.375 END TEST bdev_verify_big_io 00:32:17.375 ************************************ 00:32:17.633 13:53:14 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:17.633 13:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:32:17.633 13:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.633 13:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:17.633 ************************************ 00:32:17.633 START TEST bdev_write_zeroes 00:32:17.633 ************************************ 00:32:17.633 13:53:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:17.633 [2024-11-20 13:53:14.871407] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:17.633 [2024-11-20 13:53:14.871606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63752 ] 00:32:17.892 [2024-11-20 13:53:15.072856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.150 [2024-11-20 13:53:15.247574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.717 Running I/O for 1 seconds... 
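Unlike verify, the write_zeroes workload now running issues zero-fill commands rather than data writes, so the bdev can satisfy them without the host moving payload. If a bdev under test were also exported as a block device, the effect would be cheap to spot-check from the host with a bounded compare against /dev/zero (the /dev/nbd0 path here is hypothetical, not part of this run):

    # confirm the first 1 MiB of the device reads back as zeroes
    cmp -n $((1024 * 1024)) /dev/nbd0 /dev/zero && echo "first 1 MiB is zeroed"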
00:32:20.089 55104.00 IOPS, 215.25 MiB/s 00:32:20.089 Latency(us) 00:32:20.089 [2024-11-20T13:53:17.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.089 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme0n1 : 1.03 7812.72 30.52 0.00 0.00 16338.58 13107.20 34702.87 00:32:20.089 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme1n1p1 : 1.03 7800.04 30.47 0.00 0.00 16338.94 13232.03 33704.23 00:32:20.089 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme1n1p2 : 1.04 7787.50 30.42 0.00 0.00 16300.14 12982.37 32705.58 00:32:20.089 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme2n1 : 1.04 7775.74 30.37 0.00 0.00 16214.10 13107.20 31706.94 00:32:20.089 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme2n2 : 1.04 7764.25 30.33 0.00 0.00 16174.94 10860.25 30957.96 00:32:20.089 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme2n3 : 1.04 7752.59 30.28 0.00 0.00 16154.72 9736.78 32455.92 00:32:20.089 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.089 Nvme3n1 : 1.04 7741.13 30.24 0.00 0.00 16149.39 8925.38 34702.87 00:32:20.089 [2024-11-20T13:53:17.412Z] =================================================================================================================== 00:32:20.089 [2024-11-20T13:53:17.412Z] Total : 54433.96 212.63 0.00 0.00 16238.69 8925.38 34702.87 00:32:21.034 00:32:21.034 real 0m3.594s 00:32:21.034 user 0m3.186s 00:32:21.034 sys 0m0.288s 00:32:21.034 13:53:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.034 13:53:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:32:21.034 ************************************ 00:32:21.034 END TEST bdev_write_zeroes 00:32:21.034 ************************************ 00:32:21.293 13:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:21.293 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:32:21.293 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.293 13:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:21.293 ************************************ 00:32:21.293 START TEST bdev_json_nonenclosed 00:32:21.293 ************************************ 00:32:21.293 13:53:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:21.293 [2024-11-20 13:53:18.516468] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
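The two tests that follow feed bdevperf deliberately malformed --json configs. For contrast, a minimal well-formed config is a single top-level object whose "subsystems" member is an array; a sketch of such a file (the Malloc0 bdev and its sizes are illustrative only, not taken from this run):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
            }
          ]
        }
      ]
    }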
00:32:21.293 [2024-11-20 13:53:18.517231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63810 ] 00:32:21.552 [2024-11-20 13:53:18.709074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.552 [2024-11-20 13:53:18.834519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.552 [2024-11-20 13:53:18.834623] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:32:21.552 [2024-11-20 13:53:18.834646] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:21.552 [2024-11-20 13:53:18.834658] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:21.810 00:32:21.810 real 0m0.692s 00:32:21.810 user 0m0.434s 00:32:21.810 sys 0m0.151s 00:32:21.810 ************************************ 00:32:21.810 END TEST bdev_json_nonenclosed 00:32:21.810 ************************************ 00:32:21.810 13:53:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.810 13:53:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:32:22.067 13:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.067 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:32:22.067 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.067 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:22.067 ************************************ 00:32:22.067 START TEST bdev_json_nonarray 00:32:22.067 ************************************ 00:32:22.067 13:53:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.067 [2024-11-20 13:53:19.270009] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:22.067 [2024-11-20 13:53:19.270196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63837 ] 00:32:22.325 [2024-11-20 13:53:19.467001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.325 [2024-11-20 13:53:19.590017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.325 [2024-11-20 13:53:19.590121] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
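Between them, the nonenclosed and nonarray cases cover the two shape rules json_config_prepare_ctx enforces above. A cheap pre-flight check applying the same rules before handing a config to bdevperf (bdev.json stands for any candidate file):

    jq -e 'type == "object" and (.subsystems | type == "array")' bdev.json >/dev/null &&
        echo "config shape OK" || echo "config shape invalid"

jq exits non-zero both on parse errors and when the expression evaluates false, so this one line covers "not valid JSON", "not enclosed in {}", and "subsystems is not an array".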
00:32:22.325 [2024-11-20 13:53:19.590143] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:22.325 [2024-11-20 13:53:19.590156] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:22.583 00:32:22.583 real 0m0.692s 00:32:22.583 user 0m0.433s 00:32:22.583 sys 0m0.154s 00:32:22.583 ************************************ 00:32:22.583 END TEST bdev_json_nonarray 00:32:22.583 ************************************ 00:32:22.583 13:53:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.583 13:53:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:32:22.583 13:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:32:22.583 13:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:32:22.583 13:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:32:22.583 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:22.583 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.583 13:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:22.583 ************************************ 00:32:22.583 START TEST bdev_gpt_uuid 00:32:22.583 ************************************ 00:32:22.583 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:32:22.583 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:32:22.583 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63868 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63868 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63868 ']' 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.842 13:53:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:22.842 [2024-11-20 13:53:20.053536] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
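The gpt_uuid test starting here looks partition bdevs up by their GPT unique-partition GUID and asserts the RPC output is self-consistent. The core of that check, extracted as a standalone sketch (UUID and rpc.py path as printed below in this log; a running spdk_tgt with the bdev config loaded is assumed):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030

    bdev=$("$rpc_py" bdev_get_bdevs -b "$uuid")
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]                  # exactly one matching bdev
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]   # the GUID doubles as an alias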
00:32:22.842 [2024-11-20 13:53:20.053711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63868 ] 00:32:23.101 [2024-11-20 13:53:20.246024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.101 [2024-11-20 13:53:20.360319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.039 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.039 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:32:24.039 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:24.039 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.039 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:24.298 Some configs were skipped because the RPC state that can call them passed over. 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.298 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:32:24.298 { 00:32:24.298 "name": "Nvme1n1p1", 00:32:24.298 "aliases": [ 00:32:24.298 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:32:24.298 ], 00:32:24.298 "product_name": "GPT Disk", 00:32:24.298 "block_size": 4096, 00:32:24.298 "num_blocks": 655104, 00:32:24.298 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:32:24.298 "assigned_rate_limits": { 00:32:24.298 "rw_ios_per_sec": 0, 00:32:24.298 "rw_mbytes_per_sec": 0, 00:32:24.298 "r_mbytes_per_sec": 0, 00:32:24.298 "w_mbytes_per_sec": 0 00:32:24.298 }, 00:32:24.298 "claimed": false, 00:32:24.298 "zoned": false, 00:32:24.298 "supported_io_types": { 00:32:24.298 "read": true, 00:32:24.298 "write": true, 00:32:24.298 "unmap": true, 00:32:24.298 "flush": true, 00:32:24.298 "reset": true, 00:32:24.298 "nvme_admin": false, 00:32:24.298 "nvme_io": false, 00:32:24.298 "nvme_io_md": false, 00:32:24.298 "write_zeroes": true, 00:32:24.298 "zcopy": false, 00:32:24.298 "get_zone_info": false, 00:32:24.298 "zone_management": false, 00:32:24.298 "zone_append": false, 00:32:24.298 "compare": true, 00:32:24.298 "compare_and_write": false, 00:32:24.298 "abort": true, 00:32:24.298 "seek_hole": false, 00:32:24.298 "seek_data": false, 00:32:24.298 "copy": true, 00:32:24.298 "nvme_iov_md": false 00:32:24.298 }, 00:32:24.298 "driver_specific": { 
00:32:24.298 "gpt": { 00:32:24.298 "base_bdev": "Nvme1n1", 00:32:24.298 "offset_blocks": 256, 00:32:24.298 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:32:24.298 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:32:24.299 "partition_name": "SPDK_TEST_first" 00:32:24.299 } 00:32:24.299 } 00:32:24.299 } 00:32:24.299 ]' 00:32:24.299 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:32:24.558 { 00:32:24.558 "name": "Nvme1n1p2", 00:32:24.558 "aliases": [ 00:32:24.558 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:32:24.558 ], 00:32:24.558 "product_name": "GPT Disk", 00:32:24.558 "block_size": 4096, 00:32:24.558 "num_blocks": 655103, 00:32:24.558 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:32:24.558 "assigned_rate_limits": { 00:32:24.558 "rw_ios_per_sec": 0, 00:32:24.558 "rw_mbytes_per_sec": 0, 00:32:24.558 "r_mbytes_per_sec": 0, 00:32:24.558 "w_mbytes_per_sec": 0 00:32:24.558 }, 00:32:24.558 "claimed": false, 00:32:24.558 "zoned": false, 00:32:24.558 "supported_io_types": { 00:32:24.558 "read": true, 00:32:24.558 "write": true, 00:32:24.558 "unmap": true, 00:32:24.558 "flush": true, 00:32:24.558 "reset": true, 00:32:24.558 "nvme_admin": false, 00:32:24.558 "nvme_io": false, 00:32:24.558 "nvme_io_md": false, 00:32:24.558 "write_zeroes": true, 00:32:24.558 "zcopy": false, 00:32:24.558 "get_zone_info": false, 00:32:24.558 "zone_management": false, 00:32:24.558 "zone_append": false, 00:32:24.558 "compare": true, 00:32:24.558 "compare_and_write": false, 00:32:24.558 "abort": true, 00:32:24.558 "seek_hole": false, 00:32:24.558 "seek_data": false, 00:32:24.558 "copy": true, 00:32:24.558 "nvme_iov_md": false 00:32:24.558 }, 00:32:24.558 "driver_specific": { 00:32:24.558 "gpt": { 00:32:24.558 "base_bdev": "Nvme1n1", 00:32:24.558 "offset_blocks": 655360, 00:32:24.558 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:32:24.558 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:32:24.558 "partition_name": "SPDK_TEST_second" 00:32:24.558 } 00:32:24.558 } 00:32:24.558 } 00:32:24.558 ]' 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:32:24.558 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63868 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63868 ']' 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63868 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63868 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.817 killing process with pid 63868 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63868' 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63868 00:32:24.817 13:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63868 00:32:27.352 00:32:27.352 real 0m4.514s 00:32:27.352 user 0m4.629s 00:32:27.352 sys 0m0.562s 00:32:27.352 13:53:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.352 ************************************ 00:32:27.352 END TEST bdev_gpt_uuid 00:32:27.352 ************************************ 00:32:27.352 13:53:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:32:27.352 13:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:27.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:27.873 Waiting for block devices as requested 00:32:27.873 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.132 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:32:28.132 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.390 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.659 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:33.659 13:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:32:33.659 13:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:32:33.659 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:32:33.659 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:32:33.659 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:32:33.659 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:32:33.659 13:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:32:33.659 00:32:33.659 real 1m7.754s 00:32:33.659 user 1m25.395s 00:32:33.659 sys 0m12.163s 00:32:33.659 13:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.659 ************************************ 00:32:33.659 END TEST blockdev_nvme_gpt 00:32:33.659 13:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 ************************************ 00:32:33.659 13:53:30 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:32:33.659 13:53:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:33.659 13:53:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.659 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:32:33.659 ************************************ 00:32:33.659 START TEST nvme 00:32:33.659 ************************************ 00:32:33.659 13:53:30 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:32:33.988 * Looking for test storage... 00:32:33.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.988 13:53:31 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.988 13:53:31 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.988 13:53:31 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.988 13:53:31 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.988 13:53:31 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.988 13:53:31 nvme -- scripts/common.sh@344 -- # case "$op" in 00:32:33.988 13:53:31 nvme -- scripts/common.sh@345 -- # : 1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.988 13:53:31 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.988 13:53:31 nvme -- scripts/common.sh@365 -- # decimal 1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@353 -- # local d=1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.988 13:53:31 nvme -- scripts/common.sh@355 -- # echo 1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.988 13:53:31 nvme -- scripts/common.sh@366 -- # decimal 2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@353 -- # local d=2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.988 13:53:31 nvme -- scripts/common.sh@355 -- # echo 2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.988 13:53:31 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.988 13:53:31 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.988 13:53:31 nvme -- scripts/common.sh@368 -- # return 0 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.988 --rc genhtml_branch_coverage=1 00:32:33.988 --rc genhtml_function_coverage=1 00:32:33.988 --rc genhtml_legend=1 00:32:33.988 --rc geninfo_all_blocks=1 00:32:33.988 --rc geninfo_unexecuted_blocks=1 00:32:33.988 00:32:33.988 ' 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.988 --rc genhtml_branch_coverage=1 00:32:33.988 --rc genhtml_function_coverage=1 00:32:33.988 --rc genhtml_legend=1 00:32:33.988 --rc geninfo_all_blocks=1 00:32:33.988 --rc geninfo_unexecuted_blocks=1 00:32:33.988 00:32:33.988 ' 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.988 --rc genhtml_branch_coverage=1 00:32:33.988 --rc genhtml_function_coverage=1 00:32:33.988 --rc genhtml_legend=1 00:32:33.988 --rc geninfo_all_blocks=1 00:32:33.988 --rc geninfo_unexecuted_blocks=1 00:32:33.988 00:32:33.988 ' 00:32:33.988 13:53:31 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.988 --rc genhtml_branch_coverage=1 00:32:33.988 --rc genhtml_function_coverage=1 00:32:33.988 --rc genhtml_legend=1 00:32:33.988 --rc geninfo_all_blocks=1 00:32:33.988 --rc geninfo_unexecuted_blocks=1 00:32:33.988 00:32:33.988 ' 00:32:33.988 13:53:31 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:34.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:35.491 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.491 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.491 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.491 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:35.491 13:53:32 nvme -- nvme/nvme.sh@79 -- # uname 00:32:35.491 13:53:32 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:32:35.491 13:53:32 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:32:35.491 13:53:32 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:32:35.491 13:53:32 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1075 -- # stubpid=64528 00:32:35.491 Waiting for stub to ready for secondary processes... 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64528 ]] 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:32:35.491 13:53:32 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:32:35.491 [2024-11-20 13:53:32.713927] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:35.491 [2024-11-20 13:53:32.714117] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:32:36.427 13:53:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:36.427 13:53:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64528 ]] 00:32:36.427 13:53:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:32:36.427 [2024-11-20 13:53:33.743564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:36.686 [2024-11-20 13:53:33.852305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.686 [2024-11-20 13:53:33.852346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.686 [2024-11-20 13:53:33.852364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.686 [2024-11-20 13:53:33.871934] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:32:36.686 [2024-11-20 13:53:33.871985] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:36.687 [2024-11-20 13:53:33.882910] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:32:36.687 [2024-11-20 13:53:33.883082] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:32:36.687 [2024-11-20 13:53:33.886906] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:36.687 [2024-11-20 13:53:33.887162] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:32:36.687 [2024-11-20 13:53:33.887259] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:32:36.687 [2024-11-20 13:53:33.890993] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:36.687 [2024-11-20 13:53:33.891243] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:32:36.687 [2024-11-20 13:53:33.891344] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:32:36.687 [2024-11-20 13:53:33.895224] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:36.687 [2024-11-20 13:53:33.895447] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:32:36.687 [2024-11-20 13:53:33.895536] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:32:36.687 [2024-11-20 13:53:33.895585] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:32:36.687 [2024-11-20 13:53:33.895633] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:32:37.622 13:53:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:37.622 done. 00:32:37.622 13:53:34 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:32:37.622 13:53:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:37.622 13:53:34 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:32:37.622 13:53:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.622 13:53:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:37.622 ************************************ 00:32:37.622 START TEST nvme_reset 00:32:37.622 ************************************ 00:32:37.622 13:53:34 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:37.882 Initializing NVMe Controllers 00:32:37.882 Skipping QEMU NVMe SSD at 0000:00:10.0 00:32:37.882 Skipping QEMU NVMe SSD at 0000:00:11.0 00:32:37.882 Skipping QEMU NVMe SSD at 0000:00:13.0 00:32:37.882 Skipping QEMU NVMe SSD at 0000:00:12.0 00:32:37.882 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:32:37.882 00:32:37.882 real 0m0.349s 00:32:37.882 user 0m0.104s 00:32:37.882 sys 0m0.192s 00:32:37.882 13:53:35 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.882 ************************************ 00:32:37.882 END TEST nvme_reset 00:32:37.882 ************************************ 00:32:37.882 13:53:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:32:37.882 13:53:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:32:37.882 13:53:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:37.882 13:53:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.882 13:53:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:37.882 ************************************ 00:32:37.882 START TEST nvme_identify 00:32:37.882 ************************************ 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:32:37.882 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:32:37.882 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:32:37.882 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:32:37.882 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:37.882 13:53:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:37.882 13:53:35 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:37.882 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:32:38.456 [2024-11-20 13:53:35.468384] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64562 terminated unexpected 00:32:38.456 ===================================================== 00:32:38.456 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:38.456 ===================================================== 00:32:38.456 Controller Capabilities/Features 00:32:38.456 ================================ 00:32:38.456 Vendor ID: 1b36 00:32:38.456 Subsystem Vendor ID: 1af4 00:32:38.456 Serial Number: 12340 00:32:38.456 Model Number: QEMU NVMe Ctrl 00:32:38.456 Firmware Version: 8.0.0 00:32:38.456 Recommended Arb Burst: 6 00:32:38.456 IEEE OUI Identifier: 00 54 52 00:32:38.456 Multi-path I/O 00:32:38.456 May have multiple subsystem ports: No 00:32:38.456 May have multiple controllers: No 00:32:38.456 Associated with SR-IOV VF: No 00:32:38.456 Max Data Transfer Size: 524288 00:32:38.456 Max Number of Namespaces: 256 00:32:38.456 Max Number of I/O Queues: 64 00:32:38.456 NVMe Specification Version (VS): 1.4 00:32:38.456 NVMe Specification Version (Identify): 1.4 00:32:38.456 Maximum Queue Entries: 2048 00:32:38.456 Contiguous Queues Required: Yes 00:32:38.456 Arbitration Mechanisms Supported 00:32:38.456 Weighted Round Robin: Not Supported 00:32:38.456 Vendor Specific: Not Supported 00:32:38.456 Reset Timeout: 7500 ms 00:32:38.456 Doorbell Stride: 4 bytes 00:32:38.456 NVM Subsystem Reset: Not Supported 00:32:38.456 Command Sets Supported 00:32:38.456 NVM Command Set: Supported 00:32:38.456 Boot Partition: Not Supported 00:32:38.456 Memory Page Size Minimum: 4096 bytes 00:32:38.456 Memory Page Size Maximum: 65536 bytes 00:32:38.456 Persistent Memory Region: Not Supported 00:32:38.456 Optional Asynchronous Events Supported 00:32:38.456 Namespace Attribute Notices: Supported 00:32:38.456 Firmware Activation Notices: Not Supported 00:32:38.456 ANA Change Notices: Not Supported 00:32:38.456 PLE Aggregate Log Change Notices: Not Supported 00:32:38.456 LBA Status Info Alert Notices: Not Supported 00:32:38.456 EGE Aggregate Log Change Notices: Not Supported 00:32:38.456 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.456 Zone Descriptor Change Notices: Not Supported 00:32:38.456 Discovery Log Change Notices: Not Supported 00:32:38.456 Controller Attributes 00:32:38.456 128-bit Host Identifier: Not Supported 00:32:38.456 Non-Operational Permissive Mode: Not Supported 00:32:38.456 NVM Sets: Not Supported 00:32:38.456 Read Recovery Levels: Not Supported 00:32:38.456 Endurance Groups: Not Supported 00:32:38.456 Predictable Latency Mode: Not Supported 00:32:38.456 Traffic Based Keep ALive: Not Supported 00:32:38.456 Namespace Granularity: Not Supported 00:32:38.456 SQ Associations: Not Supported 00:32:38.456 UUID List: Not Supported 00:32:38.456 Multi-Domain Subsystem: Not Supported 00:32:38.456 Fixed Capacity Management: Not Supported 00:32:38.456 Variable Capacity Management: Not Supported 00:32:38.456 Delete Endurance Group: Not Supported 00:32:38.456 Delete NVM Set: Not Supported 00:32:38.456 Extended LBA Formats Supported: Supported 00:32:38.456 Flexible Data Placement Supported: Not Supported 00:32:38.456 00:32:38.456 Controller Memory Buffer Support 00:32:38.456 ================================ 00:32:38.456 Supported: No 
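A note on the capability dump above: "Max Data Transfer Size: 524288" is a decoded value, not a stored one. The Identify Controller data carries an MDTS field that expresses the limit as a power-of-two multiple of the controller's minimum memory page size (4096 bytes here, per "Memory Page Size Minimum"). A minimal bash sketch of that decoding, assuming the MDTS value behind this QEMU controller is 7 (since 524288 / 4096 = 2^7):

    # max transfer = min_page_size << MDTS
    echo $((4096 << 7))   # prints 524288, matching the identify output above

Once setup.sh reset has rebound the devices to the kernel nvme driver, the raw field could also be cross-checked with nvme-cli, assuming it is installed: nvme id-ctrl /dev/nvme0 | grep mdts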
00:32:38.456 00:32:38.456 Persistent Memory Region Support 00:32:38.456 ================================ 00:32:38.456 Supported: No 00:32:38.456 00:32:38.456 Admin Command Set Attributes 00:32:38.456 ============================ 00:32:38.456 Security Send/Receive: Not Supported 00:32:38.456 Format NVM: Supported 00:32:38.456 Firmware Activate/Download: Not Supported 00:32:38.456 Namespace Management: Supported 00:32:38.456 Device Self-Test: Not Supported 00:32:38.456 Directives: Supported 00:32:38.456 NVMe-MI: Not Supported 00:32:38.456 Virtualization Management: Not Supported 00:32:38.456 Doorbell Buffer Config: Supported 00:32:38.456 Get LBA Status Capability: Not Supported 00:32:38.456 Command & Feature Lockdown Capability: Not Supported 00:32:38.456 Abort Command Limit: 4 00:32:38.456 Async Event Request Limit: 4 00:32:38.456 Number of Firmware Slots: N/A 00:32:38.456 Firmware Slot 1 Read-Only: N/A 00:32:38.456 Firmware Activation Without Reset: N/A 00:32:38.456 Multiple Update Detection Support: N/A 00:32:38.456 Firmware Update Granularity: No Information Provided 00:32:38.456 Per-Namespace SMART Log: Yes 00:32:38.456 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.456 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:38.456 Command Effects Log Page: Supported 00:32:38.456 Get Log Page Extended Data: Supported 00:32:38.456 Telemetry Log Pages: Not Supported 00:32:38.456 Persistent Event Log Pages: Not Supported 00:32:38.456 Supported Log Pages Log Page: May Support 00:32:38.456 Commands Supported & Effects Log Page: Not Supported 00:32:38.456 Feature Identifiers & Effects Log Page:May Support 00:32:38.456 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.456 Data Area 4 for Telemetry Log: Not Supported 00:32:38.456 Error Log Page Entries Supported: 1 00:32:38.456 Keep Alive: Not Supported 00:32:38.456 00:32:38.456 NVM Command Set Attributes 00:32:38.456 ========================== 00:32:38.456 Submission Queue Entry Size 00:32:38.456 Max: 64 00:32:38.456 Min: 64 00:32:38.456 Completion Queue Entry Size 00:32:38.456 Max: 16 00:32:38.456 Min: 16 00:32:38.456 Number of Namespaces: 256 00:32:38.456 Compare Command: Supported 00:32:38.456 Write Uncorrectable Command: Not Supported 00:32:38.456 Dataset Management Command: Supported 00:32:38.456 Write Zeroes Command: Supported 00:32:38.456 Set Features Save Field: Supported 00:32:38.456 Reservations: Not Supported 00:32:38.456 Timestamp: Supported 00:32:38.456 Copy: Supported 00:32:38.456 Volatile Write Cache: Present 00:32:38.456 Atomic Write Unit (Normal): 1 00:32:38.456 Atomic Write Unit (PFail): 1 00:32:38.456 Atomic Compare & Write Unit: 1 00:32:38.456 Fused Compare & Write: Not Supported 00:32:38.456 Scatter-Gather List 00:32:38.456 SGL Command Set: Supported 00:32:38.457 SGL Keyed: Not Supported 00:32:38.457 SGL Bit Bucket Descriptor: Not Supported 00:32:38.457 SGL Metadata Pointer: Not Supported 00:32:38.457 Oversized SGL: Not Supported 00:32:38.457 SGL Metadata Address: Not Supported 00:32:38.457 SGL Offset: Not Supported 00:32:38.457 Transport SGL Data Block: Not Supported 00:32:38.457 Replay Protected Memory Block: Not Supported 00:32:38.457 00:32:38.457 Firmware Slot Information 00:32:38.457 ========================= 00:32:38.457 Active slot: 1 00:32:38.457 Slot 1 Firmware Revision: 1.0 00:32:38.457 00:32:38.457 00:32:38.457 Commands Supported and Effects 00:32:38.457 ============================== 00:32:38.457 Admin Commands 00:32:38.457 -------------- 00:32:38.457 Delete I/O Submission Queue (00h): Supported 
00:32:38.457 Create I/O Submission Queue (01h): Supported 00:32:38.457 Get Log Page (02h): Supported 00:32:38.457 Delete I/O Completion Queue (04h): Supported 00:32:38.457 Create I/O Completion Queue (05h): Supported 00:32:38.457 Identify (06h): Supported 00:32:38.457 Abort (08h): Supported 00:32:38.457 Set Features (09h): Supported 00:32:38.457 Get Features (0Ah): Supported 00:32:38.457 Asynchronous Event Request (0Ch): Supported 00:32:38.457 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.457 Directive Send (19h): Supported 00:32:38.457 Directive Receive (1Ah): Supported 00:32:38.457 Virtualization Management (1Ch): Supported 00:32:38.457 Doorbell Buffer Config (7Ch): Supported 00:32:38.457 Format NVM (80h): Supported LBA-Change 00:32:38.457 I/O Commands 00:32:38.457 ------------ 00:32:38.457 Flush (00h): Supported LBA-Change 00:32:38.457 Write (01h): Supported LBA-Change 00:32:38.457 Read (02h): Supported 00:32:38.457 Compare (05h): Supported 00:32:38.457 Write Zeroes (08h): Supported LBA-Change 00:32:38.457 Dataset Management (09h): Supported LBA-Change 00:32:38.457 Unknown (0Ch): Supported 00:32:38.457 Unknown (12h): Supported 00:32:38.457 Copy (19h): Supported LBA-Change 00:32:38.457 Unknown (1Dh): Supported LBA-Change 00:32:38.457 00:32:38.457 Error Log 00:32:38.457 ========= 00:32:38.457 00:32:38.457 Arbitration 00:32:38.457 =========== 00:32:38.457 Arbitration Burst: no limit 00:32:38.457 00:32:38.457 Power Management 00:32:38.457 ================ 00:32:38.457 Number of Power States: 1 00:32:38.457 Current Power State: Power State #0 00:32:38.457 Power State #0: 00:32:38.457 Max Power: 25.00 W 00:32:38.457 Non-Operational State: Operational 00:32:38.457 Entry Latency: 16 microseconds 00:32:38.457 Exit Latency: 4 microseconds 00:32:38.457 Relative Read Throughput: 0 00:32:38.457 Relative Read Latency: 0 00:32:38.457 Relative Write Throughput: 0 00:32:38.457 Relative Write Latency: 0 00:32:38.457 [2024-11-20 13:53:35.469745] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64562 terminated unexpected 00:32:38.457 Idle Power: Not Reported 00:32:38.457 Active Power: Not Reported 00:32:38.457 Non-Operational Permissive Mode: Not Supported 00:32:38.457 00:32:38.457 Health Information 00:32:38.457 ================== 00:32:38.457 Critical Warnings: 00:32:38.457 Available Spare Space: OK 00:32:38.457 Temperature: OK 00:32:38.457 Device Reliability: OK 00:32:38.457 Read Only: No 00:32:38.457 Volatile Memory Backup: OK 00:32:38.457 Current Temperature: 323 Kelvin (50 Celsius) 00:32:38.457 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:38.457 Available Spare: 0% 00:32:38.457 Available Spare Threshold: 0% 00:32:38.457 Life Percentage Used: 0% 00:32:38.457 Data Units Read: 685 00:32:38.457 Data Units Written: 613 00:32:38.457 Host Read Commands: 31173 00:32:38.457 Host Write Commands: 30959 00:32:38.457 Controller Busy Time: 0 minutes 00:32:38.457 Power Cycles: 0 00:32:38.457 Power On Hours: 0 hours 00:32:38.457 Unsafe Shutdowns: 0 00:32:38.457 Unrecoverable Media Errors: 0 00:32:38.457 Lifetime Error Log Entries: 0 00:32:38.457 Warning Temperature Time: 0 minutes 00:32:38.457 Critical Temperature Time: 0 minutes 00:32:38.457 00:32:38.457 Number of Queues 00:32:38.457 ================ 00:32:38.457 Number of I/O Submission Queues: 64 00:32:38.457 Number of I/O Completion Queues: 64 00:32:38.457 00:32:38.457 ZNS Specific Controller Data 00:32:38.457 ============================ 00:32:38.457 Zone Append Size Limit: 0 00:32:38.457
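A note on the health section above: the NVMe spec defines the composite temperature and its threshold in Kelvin, and spdk_nvme_identify appends the Celsius conversion in parentheses. A one-line bash sanity check of the figures shown:

    # NVMe temperatures are reported in Kelvin; subtract 273 for the displayed Celsius
    echo $((323 - 273)) $((343 - 273))   # prints "50 70", matching (50 Celsius) and (70 Celsius)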
00:32:38.457 00:32:38.457 Active Namespaces 00:32:38.457 ================= 00:32:38.457 Namespace ID:1 00:32:38.457 Error Recovery Timeout: Unlimited 00:32:38.457 Command Set Identifier: NVM (00h) 00:32:38.457 Deallocate: Supported 00:32:38.457 Deallocated/Unwritten Error: Supported 00:32:38.457 Deallocated Read Value: All 0x00 00:32:38.457 Deallocate in Write Zeroes: Not Supported 00:32:38.457 Deallocated Guard Field: 0xFFFF 00:32:38.457 Flush: Supported 00:32:38.457 Reservation: Not Supported 00:32:38.457 Metadata Transferred as: Separate Metadata Buffer 00:32:38.457 Namespace Sharing Capabilities: Private 00:32:38.457 Size (in LBAs): 1548666 (5GiB) 00:32:38.457 Capacity (in LBAs): 1548666 (5GiB) 00:32:38.457 Utilization (in LBAs): 1548666 (5GiB) 00:32:38.457 Thin Provisioning: Not Supported 00:32:38.457 Per-NS Atomic Units: No 00:32:38.457 Maximum Single Source Range Length: 128 00:32:38.457 Maximum Copy Length: 128 00:32:38.457 Maximum Source Range Count: 128 00:32:38.457 NGUID/EUI64 Never Reused: No 00:32:38.457 Namespace Write Protected: No 00:32:38.457 Number of LBA Formats: 8 00:32:38.457 Current LBA Format: LBA Format #07 00:32:38.457 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.457 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.457 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.457 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.457 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.457 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.457 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.457 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.457 00:32:38.457 NVM Specific Namespace Data 00:32:38.457 =========================== 00:32:38.457 Logical Block Storage Tag Mask: 0 00:32:38.457 Protection Information Capabilities: 00:32:38.457 16b Guard Protection Information Storage Tag Support: No 00:32:38.457 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.457 Storage Tag Check Read Support: No 00:32:38.457 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.457 ===================================================== 00:32:38.457 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:38.457 ===================================================== 00:32:38.457 Controller Capabilities/Features 00:32:38.457 ================================ 00:32:38.457 Vendor ID: 1b36 00:32:38.457 Subsystem Vendor ID: 1af4 00:32:38.457 Serial Number: 12341 00:32:38.457 Model Number: QEMU NVMe Ctrl 00:32:38.457 Firmware Version: 8.0.0 00:32:38.457 Recommended Arb Burst: 6 00:32:38.457 IEEE OUI Identifier: 00 54 52 00:32:38.457 Multi-path I/O 00:32:38.457 May have multiple subsystem ports: No 00:32:38.457 May have multiple controllers: No 
00:32:38.457 Associated with SR-IOV VF: No 00:32:38.457 Max Data Transfer Size: 524288 00:32:38.457 Max Number of Namespaces: 256 00:32:38.457 Max Number of I/O Queues: 64 00:32:38.457 NVMe Specification Version (VS): 1.4 00:32:38.457 NVMe Specification Version (Identify): 1.4 00:32:38.457 Maximum Queue Entries: 2048 00:32:38.457 Contiguous Queues Required: Yes 00:32:38.457 Arbitration Mechanisms Supported 00:32:38.457 Weighted Round Robin: Not Supported 00:32:38.457 Vendor Specific: Not Supported 00:32:38.457 Reset Timeout: 7500 ms 00:32:38.457 Doorbell Stride: 4 bytes 00:32:38.457 NVM Subsystem Reset: Not Supported 00:32:38.457 Command Sets Supported 00:32:38.457 NVM Command Set: Supported 00:32:38.457 Boot Partition: Not Supported 00:32:38.457 Memory Page Size Minimum: 4096 bytes 00:32:38.457 Memory Page Size Maximum: 65536 bytes 00:32:38.457 Persistent Memory Region: Not Supported 00:32:38.457 Optional Asynchronous Events Supported 00:32:38.457 Namespace Attribute Notices: Supported 00:32:38.457 Firmware Activation Notices: Not Supported 00:32:38.457 ANA Change Notices: Not Supported 00:32:38.458 PLE Aggregate Log Change Notices: Not Supported 00:32:38.458 LBA Status Info Alert Notices: Not Supported 00:32:38.458 EGE Aggregate Log Change Notices: Not Supported 00:32:38.458 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.458 Zone Descriptor Change Notices: Not Supported 00:32:38.458 Discovery Log Change Notices: Not Supported 00:32:38.458 Controller Attributes 00:32:38.458 128-bit Host Identifier: Not Supported 00:32:38.458 Non-Operational Permissive Mode: Not Supported 00:32:38.458 NVM Sets: Not Supported 00:32:38.458 Read Recovery Levels: Not Supported 00:32:38.458 Endurance Groups: Not Supported 00:32:38.458 Predictable Latency Mode: Not Supported 00:32:38.458 Traffic Based Keep ALive: Not Supported 00:32:38.458 Namespace Granularity: Not Supported 00:32:38.458 SQ Associations: Not Supported 00:32:38.458 UUID List: Not Supported 00:32:38.458 Multi-Domain Subsystem: Not Supported 00:32:38.458 Fixed Capacity Management: Not Supported 00:32:38.458 Variable Capacity Management: Not Supported 00:32:38.458 Delete Endurance Group: Not Supported 00:32:38.458 Delete NVM Set: Not Supported 00:32:38.458 Extended LBA Formats Supported: Supported 00:32:38.458 Flexible Data Placement Supported: Not Supported 00:32:38.458 00:32:38.458 Controller Memory Buffer Support 00:32:38.458 ================================ 00:32:38.458 Supported: No 00:32:38.458 00:32:38.458 Persistent Memory Region Support 00:32:38.458 ================================ 00:32:38.458 Supported: No 00:32:38.458 00:32:38.458 Admin Command Set Attributes 00:32:38.458 ============================ 00:32:38.458 Security Send/Receive: Not Supported 00:32:38.458 Format NVM: Supported 00:32:38.458 Firmware Activate/Download: Not Supported 00:32:38.458 Namespace Management: Supported 00:32:38.458 Device Self-Test: Not Supported 00:32:38.458 Directives: Supported 00:32:38.458 NVMe-MI: Not Supported 00:32:38.458 Virtualization Management: Not Supported 00:32:38.458 Doorbell Buffer Config: Supported 00:32:38.458 Get LBA Status Capability: Not Supported 00:32:38.458 Command & Feature Lockdown Capability: Not Supported 00:32:38.458 Abort Command Limit: 4 00:32:38.458 Async Event Request Limit: 4 00:32:38.458 Number of Firmware Slots: N/A 00:32:38.458 Firmware Slot 1 Read-Only: N/A 00:32:38.458 Firmware Activation Without Reset: N/A 00:32:38.458 Multiple Update Detection Support: N/A 00:32:38.458 Firmware Update Granularity: No 
Information Provided 00:32:38.458 Per-Namespace SMART Log: Yes 00:32:38.458 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.458 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:32:38.458 Command Effects Log Page: Supported 00:32:38.458 Get Log Page Extended Data: Supported 00:32:38.458 Telemetry Log Pages: Not Supported 00:32:38.458 Persistent Event Log Pages: Not Supported 00:32:38.458 Supported Log Pages Log Page: May Support 00:32:38.458 Commands Supported & Effects Log Page: Not Supported 00:32:38.458 Feature Identifiers & Effects Log Page:May Support 00:32:38.458 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.458 Data Area 4 for Telemetry Log: Not Supported 00:32:38.458 Error Log Page Entries Supported: 1 00:32:38.458 Keep Alive: Not Supported 00:32:38.458 00:32:38.458 NVM Command Set Attributes 00:32:38.458 ========================== 00:32:38.458 Submission Queue Entry Size 00:32:38.458 Max: 64 00:32:38.458 Min: 64 00:32:38.458 Completion Queue Entry Size 00:32:38.458 Max: 16 00:32:38.458 Min: 16 00:32:38.458 Number of Namespaces: 256 00:32:38.458 Compare Command: Supported 00:32:38.458 Write Uncorrectable Command: Not Supported 00:32:38.458 Dataset Management Command: Supported 00:32:38.458 Write Zeroes Command: Supported 00:32:38.458 Set Features Save Field: Supported 00:32:38.458 Reservations: Not Supported 00:32:38.458 Timestamp: Supported 00:32:38.458 Copy: Supported 00:32:38.458 Volatile Write Cache: Present 00:32:38.458 Atomic Write Unit (Normal): 1 00:32:38.458 Atomic Write Unit (PFail): 1 00:32:38.458 Atomic Compare & Write Unit: 1 00:32:38.458 Fused Compare & Write: Not Supported 00:32:38.458 Scatter-Gather List 00:32:38.458 SGL Command Set: Supported 00:32:38.458 SGL Keyed: Not Supported 00:32:38.458 SGL Bit Bucket Descriptor: Not Supported 00:32:38.458 SGL Metadata Pointer: Not Supported 00:32:38.458 Oversized SGL: Not Supported 00:32:38.458 SGL Metadata Address: Not Supported 00:32:38.458 SGL Offset: Not Supported 00:32:38.458 Transport SGL Data Block: Not Supported 00:32:38.458 Replay Protected Memory Block: Not Supported 00:32:38.458 00:32:38.458 Firmware Slot Information 00:32:38.458 ========================= 00:32:38.458 Active slot: 1 00:32:38.458 Slot 1 Firmware Revision: 1.0 00:32:38.458 00:32:38.458 00:32:38.458 Commands Supported and Effects 00:32:38.458 ============================== 00:32:38.458 Admin Commands 00:32:38.458 -------------- 00:32:38.458 Delete I/O Submission Queue (00h): Supported 00:32:38.458 Create I/O Submission Queue (01h): Supported 00:32:38.458 Get Log Page (02h): Supported 00:32:38.458 Delete I/O Completion Queue (04h): Supported 00:32:38.458 Create I/O Completion Queue (05h): Supported 00:32:38.458 Identify (06h): Supported 00:32:38.458 Abort (08h): Supported 00:32:38.458 Set Features (09h): Supported 00:32:38.458 Get Features (0Ah): Supported 00:32:38.458 Asynchronous Event Request (0Ch): Supported 00:32:38.458 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.458 Directive Send (19h): Supported 00:32:38.458 Directive Receive (1Ah): Supported 00:32:38.458 Virtualization Management (1Ch): Supported 00:32:38.458 Doorbell Buffer Config (7Ch): Supported 00:32:38.458 Format NVM (80h): Supported LBA-Change 00:32:38.458 I/O Commands 00:32:38.458 ------------ 00:32:38.458 Flush (00h): Supported LBA-Change 00:32:38.458 Write (01h): Supported LBA-Change 00:32:38.458 Read (02h): Supported 00:32:38.458 Compare (05h): Supported 00:32:38.458 Write Zeroes (08h): Supported LBA-Change 00:32:38.458 Dataset Management 
(09h): Supported LBA-Change 00:32:38.458 Unknown (0Ch): Supported 00:32:38.458 Unknown (12h): Supported 00:32:38.458 Copy (19h): Supported LBA-Change 00:32:38.458 Unknown (1Dh): Supported LBA-Change 00:32:38.458 00:32:38.458 Error Log 00:32:38.458 ========= 00:32:38.458 00:32:38.458 Arbitration 00:32:38.458 =========== 00:32:38.458 Arbitration Burst: no limit 00:32:38.458 00:32:38.458 Power Management 00:32:38.458 ================ 00:32:38.458 Number of Power States: 1 00:32:38.458 Current Power State: Power State #0 00:32:38.458 Power State #0: 00:32:38.458 Max Power: 25.00 W 00:32:38.458 Non-Operational State: Operational 00:32:38.458 Entry Latency: 16 microseconds 00:32:38.458 Exit Latency: 4 microseconds 00:32:38.458 Relative Read Throughput: 0 00:32:38.458 Relative Read Latency: 0 00:32:38.458 Relative Write Throughput: 0 00:32:38.458 Relative Write Latency: 0 00:32:38.458 Idle Power: Not Reported 00:32:38.458 Active Power: Not Reported 00:32:38.458 Non-Operational Permissive Mode: Not Supported 00:32:38.458 00:32:38.458 Health Information 00:32:38.458 ================== 00:32:38.458 Critical Warnings: 00:32:38.458 Available Spare Space: OK 00:32:38.458 [2024-11-20 13:53:35.470639] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64562 terminated unexpected 00:32:38.458 Temperature: OK 00:32:38.458 Device Reliability: OK 00:32:38.458 Read Only: No 00:32:38.458 Volatile Memory Backup: OK 00:32:38.458 Current Temperature: 323 Kelvin (50 Celsius) 00:32:38.458 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:38.458 Available Spare: 0% 00:32:38.458 Available Spare Threshold: 0% 00:32:38.458 Life Percentage Used: 0% 00:32:38.458 Data Units Read: 1068 00:32:38.458 Data Units Written: 930 00:32:38.458 Host Read Commands: 47099 00:32:38.458 Host Write Commands: 45792 00:32:38.458 Controller Busy Time: 0 minutes 00:32:38.458 Power Cycles: 0 00:32:38.458 Power On Hours: 0 hours 00:32:38.458 Unsafe Shutdowns: 0 00:32:38.458 Unrecoverable Media Errors: 0 00:32:38.458 Lifetime Error Log Entries: 0 00:32:38.458 Warning Temperature Time: 0 minutes 00:32:38.458 Critical Temperature Time: 0 minutes 00:32:38.458 00:32:38.458 Number of Queues 00:32:38.458 ================ 00:32:38.458 Number of I/O Submission Queues: 64 00:32:38.458 Number of I/O Completion Queues: 64 00:32:38.458 00:32:38.458 ZNS Specific Controller Data 00:32:38.458 ============================ 00:32:38.458 Zone Append Size Limit: 0 00:32:38.458 00:32:38.458 00:32:38.458 Active Namespaces 00:32:38.458 ================= 00:32:38.458 Namespace ID:1 00:32:38.458 Error Recovery Timeout: Unlimited 00:32:38.459 Command Set Identifier: NVM (00h) 00:32:38.459 Deallocate: Supported 00:32:38.459 Deallocated/Unwritten Error: Supported 00:32:38.459 Deallocated Read Value: All 0x00 00:32:38.459 Deallocate in Write Zeroes: Not Supported 00:32:38.459 Deallocated Guard Field: 0xFFFF 00:32:38.459 Flush: Supported 00:32:38.459 Reservation: Not Supported 00:32:38.459 Namespace Sharing Capabilities: Private 00:32:38.459 Size (in LBAs): 1310720 (5GiB) 00:32:38.459 Capacity (in LBAs): 1310720 (5GiB) 00:32:38.459 Utilization (in LBAs): 1310720 (5GiB) 00:32:38.459 Thin Provisioning: Not Supported 00:32:38.459 Per-NS Atomic Units: No 00:32:38.459 Maximum Single Source Range Length: 128 00:32:38.459 Maximum Copy Length: 128 00:32:38.459 Maximum Source Range Count: 128 00:32:38.459 NGUID/EUI64 Never Reused: No 00:32:38.459 Namespace Write Protected: No 00:32:38.459 Number of LBA Formats: 8 00:32:38.459 Current LBA Format:
LBA Format #04 00:32:38.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.459 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.459 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.459 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.459 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.459 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.459 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.459 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.459 00:32:38.459 NVM Specific Namespace Data 00:32:38.459 =========================== 00:32:38.459 Logical Block Storage Tag Mask: 0 00:32:38.459 Protection Information Capabilities: 00:32:38.459 16b Guard Protection Information Storage Tag Support: No 00:32:38.459 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.459 Storage Tag Check Read Support: No 00:32:38.459 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.459 ===================================================== 00:32:38.459 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:38.459 ===================================================== 00:32:38.459 Controller Capabilities/Features 00:32:38.459 ================================ 00:32:38.459 Vendor ID: 1b36 00:32:38.459 Subsystem Vendor ID: 1af4 00:32:38.459 Serial Number: 12343 00:32:38.459 Model Number: QEMU NVMe Ctrl 00:32:38.459 Firmware Version: 8.0.0 00:32:38.459 Recommended Arb Burst: 6 00:32:38.459 IEEE OUI Identifier: 00 54 52 00:32:38.459 Multi-path I/O 00:32:38.459 May have multiple subsystem ports: No 00:32:38.459 May have multiple controllers: Yes 00:32:38.459 Associated with SR-IOV VF: No 00:32:38.459 Max Data Transfer Size: 524288 00:32:38.459 Max Number of Namespaces: 256 00:32:38.459 Max Number of I/O Queues: 64 00:32:38.459 NVMe Specification Version (VS): 1.4 00:32:38.459 NVMe Specification Version (Identify): 1.4 00:32:38.459 Maximum Queue Entries: 2048 00:32:38.459 Contiguous Queues Required: Yes 00:32:38.459 Arbitration Mechanisms Supported 00:32:38.459 Weighted Round Robin: Not Supported 00:32:38.459 Vendor Specific: Not Supported 00:32:38.459 Reset Timeout: 7500 ms 00:32:38.459 Doorbell Stride: 4 bytes 00:32:38.459 NVM Subsystem Reset: Not Supported 00:32:38.459 Command Sets Supported 00:32:38.459 NVM Command Set: Supported 00:32:38.459 Boot Partition: Not Supported 00:32:38.459 Memory Page Size Minimum: 4096 bytes 00:32:38.459 Memory Page Size Maximum: 65536 bytes 00:32:38.459 Persistent Memory Region: Not Supported 00:32:38.459 Optional Asynchronous Events Supported 00:32:38.459 Namespace Attribute Notices: Supported 00:32:38.459 Firmware Activation Notices: Not Supported 00:32:38.459 ANA Change Notices: Not Supported 00:32:38.459 PLE Aggregate Log 
Change Notices: Not Supported 00:32:38.459 LBA Status Info Alert Notices: Not Supported 00:32:38.459 EGE Aggregate Log Change Notices: Not Supported 00:32:38.459 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.459 Zone Descriptor Change Notices: Not Supported 00:32:38.459 Discovery Log Change Notices: Not Supported 00:32:38.459 Controller Attributes 00:32:38.459 128-bit Host Identifier: Not Supported 00:32:38.459 Non-Operational Permissive Mode: Not Supported 00:32:38.459 NVM Sets: Not Supported 00:32:38.459 Read Recovery Levels: Not Supported 00:32:38.459 Endurance Groups: Supported 00:32:38.459 Predictable Latency Mode: Not Supported 00:32:38.459 Traffic Based Keep ALive: Not Supported 00:32:38.459 Namespace Granularity: Not Supported 00:32:38.459 SQ Associations: Not Supported 00:32:38.459 UUID List: Not Supported 00:32:38.459 Multi-Domain Subsystem: Not Supported 00:32:38.459 Fixed Capacity Management: Not Supported 00:32:38.459 Variable Capacity Management: Not Supported 00:32:38.459 Delete Endurance Group: Not Supported 00:32:38.459 Delete NVM Set: Not Supported 00:32:38.459 Extended LBA Formats Supported: Supported 00:32:38.459 Flexible Data Placement Supported: Supported 00:32:38.459 00:32:38.459 Controller Memory Buffer Support 00:32:38.459 ================================ 00:32:38.459 Supported: No 00:32:38.459 00:32:38.459 Persistent Memory Region Support 00:32:38.459 ================================ 00:32:38.459 Supported: No 00:32:38.459 00:32:38.459 Admin Command Set Attributes 00:32:38.459 ============================ 00:32:38.459 Security Send/Receive: Not Supported 00:32:38.459 Format NVM: Supported 00:32:38.459 Firmware Activate/Download: Not Supported 00:32:38.459 Namespace Management: Supported 00:32:38.459 Device Self-Test: Not Supported 00:32:38.459 Directives: Supported 00:32:38.459 NVMe-MI: Not Supported 00:32:38.459 Virtualization Management: Not Supported 00:32:38.459 Doorbell Buffer Config: Supported 00:32:38.459 Get LBA Status Capability: Not Supported 00:32:38.459 Command & Feature Lockdown Capability: Not Supported 00:32:38.459 Abort Command Limit: 4 00:32:38.459 Async Event Request Limit: 4 00:32:38.459 Number of Firmware Slots: N/A 00:32:38.459 Firmware Slot 1 Read-Only: N/A 00:32:38.459 Firmware Activation Without Reset: N/A 00:32:38.459 Multiple Update Detection Support: N/A 00:32:38.459 Firmware Update Granularity: No Information Provided 00:32:38.459 Per-Namespace SMART Log: Yes 00:32:38.459 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.459 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:32:38.459 Command Effects Log Page: Supported 00:32:38.459 Get Log Page Extended Data: Supported 00:32:38.459 Telemetry Log Pages: Not Supported 00:32:38.459 Persistent Event Log Pages: Not Supported 00:32:38.459 Supported Log Pages Log Page: May Support 00:32:38.459 Commands Supported & Effects Log Page: Not Supported 00:32:38.459 Feature Identifiers & Effects Log Page:May Support 00:32:38.459 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.459 Data Area 4 for Telemetry Log: Not Supported 00:32:38.459 Error Log Page Entries Supported: 1 00:32:38.459 Keep Alive: Not Supported 00:32:38.459 00:32:38.459 NVM Command Set Attributes 00:32:38.459 ========================== 00:32:38.459 Submission Queue Entry Size 00:32:38.459 Max: 64 00:32:38.459 Min: 64 00:32:38.459 Completion Queue Entry Size 00:32:38.459 Max: 16 00:32:38.459 Min: 16 00:32:38.459 Number of Namespaces: 256 00:32:38.459 Compare Command: Supported 00:32:38.459 Write 
Uncorrectable Command: Not Supported 00:32:38.459 Dataset Management Command: Supported 00:32:38.459 Write Zeroes Command: Supported 00:32:38.459 Set Features Save Field: Supported 00:32:38.459 Reservations: Not Supported 00:32:38.459 Timestamp: Supported 00:32:38.459 Copy: Supported 00:32:38.459 Volatile Write Cache: Present 00:32:38.459 Atomic Write Unit (Normal): 1 00:32:38.459 Atomic Write Unit (PFail): 1 00:32:38.459 Atomic Compare & Write Unit: 1 00:32:38.459 Fused Compare & Write: Not Supported 00:32:38.459 Scatter-Gather List 00:32:38.459 SGL Command Set: Supported 00:32:38.459 SGL Keyed: Not Supported 00:32:38.459 SGL Bit Bucket Descriptor: Not Supported 00:32:38.459 SGL Metadata Pointer: Not Supported 00:32:38.459 Oversized SGL: Not Supported 00:32:38.459 SGL Metadata Address: Not Supported 00:32:38.459 SGL Offset: Not Supported 00:32:38.459 Transport SGL Data Block: Not Supported 00:32:38.459 Replay Protected Memory Block: Not Supported 00:32:38.459 00:32:38.459 Firmware Slot Information 00:32:38.459 ========================= 00:32:38.459 Active slot: 1 00:32:38.459 Slot 1 Firmware Revision: 1.0 00:32:38.459 00:32:38.459 00:32:38.459 Commands Supported and Effects 00:32:38.460 ============================== 00:32:38.460 Admin Commands 00:32:38.460 -------------- 00:32:38.460 Delete I/O Submission Queue (00h): Supported 00:32:38.460 Create I/O Submission Queue (01h): Supported 00:32:38.460 Get Log Page (02h): Supported 00:32:38.460 Delete I/O Completion Queue (04h): Supported 00:32:38.460 Create I/O Completion Queue (05h): Supported 00:32:38.460 Identify (06h): Supported 00:32:38.460 Abort (08h): Supported 00:32:38.460 Set Features (09h): Supported 00:32:38.460 Get Features (0Ah): Supported 00:32:38.460 Asynchronous Event Request (0Ch): Supported 00:32:38.460 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.460 Directive Send (19h): Supported 00:32:38.460 Directive Receive (1Ah): Supported 00:32:38.460 Virtualization Management (1Ch): Supported 00:32:38.460 Doorbell Buffer Config (7Ch): Supported 00:32:38.460 Format NVM (80h): Supported LBA-Change 00:32:38.460 I/O Commands 00:32:38.460 ------------ 00:32:38.460 Flush (00h): Supported LBA-Change 00:32:38.460 Write (01h): Supported LBA-Change 00:32:38.460 Read (02h): Supported 00:32:38.460 Compare (05h): Supported 00:32:38.460 Write Zeroes (08h): Supported LBA-Change 00:32:38.460 Dataset Management (09h): Supported LBA-Change 00:32:38.460 Unknown (0Ch): Supported 00:32:38.460 Unknown (12h): Supported 00:32:38.460 Copy (19h): Supported LBA-Change 00:32:38.460 Unknown (1Dh): Supported LBA-Change 00:32:38.460 00:32:38.460 Error Log 00:32:38.460 ========= 00:32:38.460 00:32:38.460 Arbitration 00:32:38.460 =========== 00:32:38.460 Arbitration Burst: no limit 00:32:38.460 00:32:38.460 Power Management 00:32:38.460 ================ 00:32:38.460 Number of Power States: 1 00:32:38.460 Current Power State: Power State #0 00:32:38.460 Power State #0: 00:32:38.460 Max Power: 25.00 W 00:32:38.460 Non-Operational State: Operational 00:32:38.460 Entry Latency: 16 microseconds 00:32:38.460 Exit Latency: 4 microseconds 00:32:38.460 Relative Read Throughput: 0 00:32:38.460 Relative Read Latency: 0 00:32:38.460 Relative Write Throughput: 0 00:32:38.460 Relative Write Latency: 0 00:32:38.460 Idle Power: Not Reported 00:32:38.460 Active Power: Not Reported 00:32:38.460 Non-Operational Permissive Mode: Not Supported 00:32:38.460 00:32:38.460 Health Information 00:32:38.460 ================== 00:32:38.460 Critical Warnings: 00:32:38.460 
Available Spare Space: OK 00:32:38.460 Temperature: OK 00:32:38.460 Device Reliability: OK 00:32:38.460 Read Only: No 00:32:38.460 Volatile Memory Backup: OK 00:32:38.460 Current Temperature: 323 Kelvin (50 Celsius) 00:32:38.460 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:38.460 Available Spare: 0% 00:32:38.460 Available Spare Threshold: 0% 00:32:38.460 Life Percentage Used: 0% 00:32:38.460 Data Units Read: 747 00:32:38.460 Data Units Written: 676 00:32:38.460 Host Read Commands: 31891 00:32:38.460 Host Write Commands: 31314 00:32:38.460 Controller Busy Time: 0 minutes 00:32:38.460 Power Cycles: 0 00:32:38.460 Power On Hours: 0 hours 00:32:38.460 Unsafe Shutdowns: 0 00:32:38.460 Unrecoverable Media Errors: 0 00:32:38.460 Lifetime Error Log Entries: 0 00:32:38.460 Warning Temperature Time: 0 minutes 00:32:38.460 Critical Temperature Time: 0 minutes 00:32:38.460 00:32:38.460 Number of Queues 00:32:38.460 ================ 00:32:38.460 Number of I/O Submission Queues: 64 00:32:38.460 Number of I/O Completion Queues: 64 00:32:38.460 00:32:38.460 ZNS Specific Controller Data 00:32:38.460 ============================ 00:32:38.460 Zone Append Size Limit: 0 00:32:38.460 00:32:38.460 00:32:38.460 Active Namespaces 00:32:38.460 ================= 00:32:38.460 Namespace ID:1 00:32:38.460 Error Recovery Timeout: Unlimited 00:32:38.460 Command Set Identifier: NVM (00h) 00:32:38.460 Deallocate: Supported 00:32:38.460 Deallocated/Unwritten Error: Supported 00:32:38.460 Deallocated Read Value: All 0x00 00:32:38.460 Deallocate in Write Zeroes: Not Supported 00:32:38.460 Deallocated Guard Field: 0xFFFF 00:32:38.460 Flush: Supported 00:32:38.460 Reservation: Not Supported 00:32:38.460 Namespace Sharing Capabilities: Multiple Controllers 00:32:38.460 Size (in LBAs): 262144 (1GiB) 00:32:38.460 Capacity (in LBAs): 262144 (1GiB) 00:32:38.460 Utilization (in LBAs): 262144 (1GiB) 00:32:38.460 Thin Provisioning: Not Supported 00:32:38.460 Per-NS Atomic Units: No 00:32:38.460 Maximum Single Source Range Length: 128 00:32:38.460 Maximum Copy Length: 128 00:32:38.460 Maximum Source Range Count: 128 00:32:38.460 NGUID/EUI64 Never Reused: No 00:32:38.460 Namespace Write Protected: No 00:32:38.460 Endurance group ID: 1 00:32:38.460 Number of LBA Formats: 8 00:32:38.460 Current LBA Format: LBA Format #04 00:32:38.460 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.460 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.460 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.460 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.460 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.460 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.460 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.460 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.460 00:32:38.460 Get Feature FDP: 00:32:38.460 ================ 00:32:38.460 Enabled: Yes 00:32:38.460 FDP configuration index: 0 00:32:38.460 00:32:38.460 FDP configurations log page 00:32:38.460 =========================== 00:32:38.460 Number of FDP configurations: 1 00:32:38.460 Version: 0 00:32:38.460 Size: 112 00:32:38.460 FDP Configuration Descriptor: 0 00:32:38.460 Descriptor Size: 96 00:32:38.460 Reclaim Group Identifier format: 2 00:32:38.460 FDP Volatile Write Cache: Not Present 00:32:38.460 FDP Configuration: Valid 00:32:38.460 Vendor Specific Size: 0 00:32:38.460 Number of Reclaim Groups: 2 00:32:38.460 Number of Reclaim Unit Handles: 8 00:32:38.460 Max Placement Identifiers: 128 00:32:38.460 Number of
Namespaces Supported: 256 00:32:38.460 Reclaim unit Nominal Size: 6000000 bytes 00:32:38.460 Estimated Reclaim Unit Time Limit: Not Reported 00:32:38.460 RUH Desc #000: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #001: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #002: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #003: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #004: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #005: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #006: RUH Type: Initially Isolated 00:32:38.460 RUH Desc #007: RUH Type: Initially Isolated 00:32:38.460 00:32:38.460 FDP reclaim unit handle usage log page 00:32:38.460 ====================================== 00:32:38.460 Number of Reclaim Unit Handles: 8 00:32:38.460 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:38.460 RUH Usage Desc #001: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #002: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #003: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #004: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #005: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #006: RUH Attributes: Unused 00:32:38.460 RUH Usage Desc #007: RUH Attributes: Unused 00:32:38.460 00:32:38.460 FDP statistics log page 00:32:38.460 ======================= 00:32:38.460 Host bytes with metadata written: 428318720 00:32:38.460 [2024-11-20 13:53:35.472538] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64562 terminated unexpected 00:32:38.460 Media bytes with metadata written: 428363776 00:32:38.460 Media bytes erased: 0 00:32:38.460 00:32:38.460 FDP events log page 00:32:38.460 =================== 00:32:38.460 Number of FDP events: 0 00:32:38.460 00:32:38.460 NVM Specific Namespace Data 00:32:38.460 =========================== 00:32:38.460 Logical Block Storage Tag Mask: 0 00:32:38.460 Protection Information Capabilities: 00:32:38.460 16b Guard Protection Information Storage Tag Support: No 00:32:38.460 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.460 Storage Tag Check Read Support: No 00:32:38.460 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.460 ===================================================== 00:32:38.460 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:38.460 ===================================================== 00:32:38.460 Controller Capabilities/Features 00:32:38.460 ================================ 00:32:38.461 Vendor ID: 1b36 00:32:38.461 Subsystem Vendor ID: 1af4 00:32:38.461 Serial Number: 12342 00:32:38.461 Model Number: QEMU NVMe Ctrl 00:32:38.461 Firmware Version: 8.0.0 00:32:38.461 Recommended Arb Burst: 6 00:32:38.461 IEEE OUI Identifier: 00 54 52 00:32:38.461 Multi-path I/O
00:32:38.461 May have multiple subsystem ports: No 00:32:38.461 May have multiple controllers: No 00:32:38.461 Associated with SR-IOV VF: No 00:32:38.461 Max Data Transfer Size: 524288 00:32:38.461 Max Number of Namespaces: 256 00:32:38.461 Max Number of I/O Queues: 64 00:32:38.461 NVMe Specification Version (VS): 1.4 00:32:38.461 NVMe Specification Version (Identify): 1.4 00:32:38.461 Maximum Queue Entries: 2048 00:32:38.461 Contiguous Queues Required: Yes 00:32:38.461 Arbitration Mechanisms Supported 00:32:38.461 Weighted Round Robin: Not Supported 00:32:38.461 Vendor Specific: Not Supported 00:32:38.461 Reset Timeout: 7500 ms 00:32:38.461 Doorbell Stride: 4 bytes 00:32:38.461 NVM Subsystem Reset: Not Supported 00:32:38.461 Command Sets Supported 00:32:38.461 NVM Command Set: Supported 00:32:38.461 Boot Partition: Not Supported 00:32:38.461 Memory Page Size Minimum: 4096 bytes 00:32:38.461 Memory Page Size Maximum: 65536 bytes 00:32:38.461 Persistent Memory Region: Not Supported 00:32:38.461 Optional Asynchronous Events Supported 00:32:38.461 Namespace Attribute Notices: Supported 00:32:38.461 Firmware Activation Notices: Not Supported 00:32:38.461 ANA Change Notices: Not Supported 00:32:38.461 PLE Aggregate Log Change Notices: Not Supported 00:32:38.461 LBA Status Info Alert Notices: Not Supported 00:32:38.461 EGE Aggregate Log Change Notices: Not Supported 00:32:38.461 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.461 Zone Descriptor Change Notices: Not Supported 00:32:38.461 Discovery Log Change Notices: Not Supported 00:32:38.461 Controller Attributes 00:32:38.461 128-bit Host Identifier: Not Supported 00:32:38.461 Non-Operational Permissive Mode: Not Supported 00:32:38.461 NVM Sets: Not Supported 00:32:38.461 Read Recovery Levels: Not Supported 00:32:38.461 Endurance Groups: Not Supported 00:32:38.461 Predictable Latency Mode: Not Supported 00:32:38.461 Traffic Based Keep ALive: Not Supported 00:32:38.461 Namespace Granularity: Not Supported 00:32:38.461 SQ Associations: Not Supported 00:32:38.461 UUID List: Not Supported 00:32:38.461 Multi-Domain Subsystem: Not Supported 00:32:38.461 Fixed Capacity Management: Not Supported 00:32:38.461 Variable Capacity Management: Not Supported 00:32:38.461 Delete Endurance Group: Not Supported 00:32:38.461 Delete NVM Set: Not Supported 00:32:38.461 Extended LBA Formats Supported: Supported 00:32:38.461 Flexible Data Placement Supported: Not Supported 00:32:38.461 00:32:38.461 Controller Memory Buffer Support 00:32:38.461 ================================ 00:32:38.461 Supported: No 00:32:38.461 00:32:38.461 Persistent Memory Region Support 00:32:38.461 ================================ 00:32:38.461 Supported: No 00:32:38.461 00:32:38.461 Admin Command Set Attributes 00:32:38.461 ============================ 00:32:38.461 Security Send/Receive: Not Supported 00:32:38.461 Format NVM: Supported 00:32:38.461 Firmware Activate/Download: Not Supported 00:32:38.461 Namespace Management: Supported 00:32:38.461 Device Self-Test: Not Supported 00:32:38.461 Directives: Supported 00:32:38.461 NVMe-MI: Not Supported 00:32:38.461 Virtualization Management: Not Supported 00:32:38.461 Doorbell Buffer Config: Supported 00:32:38.461 Get LBA Status Capability: Not Supported 00:32:38.461 Command & Feature Lockdown Capability: Not Supported 00:32:38.461 Abort Command Limit: 4 00:32:38.461 Async Event Request Limit: 4 00:32:38.461 Number of Firmware Slots: N/A 00:32:38.461 Firmware Slot 1 Read-Only: N/A 00:32:38.461 Firmware Activation Without Reset: N/A 
00:32:38.461 Multiple Update Detection Support: N/A 00:32:38.461 Firmware Update Granularity: No Information Provided 00:32:38.461 Per-Namespace SMART Log: Yes 00:32:38.461 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.461 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:32:38.461 Command Effects Log Page: Supported 00:32:38.461 Get Log Page Extended Data: Supported 00:32:38.461 Telemetry Log Pages: Not Supported 00:32:38.461 Persistent Event Log Pages: Not Supported 00:32:38.461 Supported Log Pages Log Page: May Support 00:32:38.461 Commands Supported & Effects Log Page: Not Supported 00:32:38.461 Feature Identifiers & Effects Log Page:May Support 00:32:38.461 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.461 Data Area 4 for Telemetry Log: Not Supported 00:32:38.461 Error Log Page Entries Supported: 1 00:32:38.461 Keep Alive: Not Supported 00:32:38.461 00:32:38.461 NVM Command Set Attributes 00:32:38.461 ========================== 00:32:38.461 Submission Queue Entry Size 00:32:38.461 Max: 64 00:32:38.461 Min: 64 00:32:38.461 Completion Queue Entry Size 00:32:38.461 Max: 16 00:32:38.461 Min: 16 00:32:38.461 Number of Namespaces: 256 00:32:38.461 Compare Command: Supported 00:32:38.461 Write Uncorrectable Command: Not Supported 00:32:38.461 Dataset Management Command: Supported 00:32:38.461 Write Zeroes Command: Supported 00:32:38.461 Set Features Save Field: Supported 00:32:38.461 Reservations: Not Supported 00:32:38.461 Timestamp: Supported 00:32:38.461 Copy: Supported 00:32:38.461 Volatile Write Cache: Present 00:32:38.461 Atomic Write Unit (Normal): 1 00:32:38.461 Atomic Write Unit (PFail): 1 00:32:38.461 Atomic Compare & Write Unit: 1 00:32:38.461 Fused Compare & Write: Not Supported 00:32:38.461 Scatter-Gather List 00:32:38.461 SGL Command Set: Supported 00:32:38.461 SGL Keyed: Not Supported 00:32:38.461 SGL Bit Bucket Descriptor: Not Supported 00:32:38.461 SGL Metadata Pointer: Not Supported 00:32:38.461 Oversized SGL: Not Supported 00:32:38.461 SGL Metadata Address: Not Supported 00:32:38.461 SGL Offset: Not Supported 00:32:38.461 Transport SGL Data Block: Not Supported 00:32:38.461 Replay Protected Memory Block: Not Supported 00:32:38.461 00:32:38.461 Firmware Slot Information 00:32:38.461 ========================= 00:32:38.461 Active slot: 1 00:32:38.461 Slot 1 Firmware Revision: 1.0 00:32:38.461 00:32:38.461 00:32:38.461 Commands Supported and Effects 00:32:38.461 ============================== 00:32:38.461 Admin Commands 00:32:38.461 -------------- 00:32:38.461 Delete I/O Submission Queue (00h): Supported 00:32:38.461 Create I/O Submission Queue (01h): Supported 00:32:38.461 Get Log Page (02h): Supported 00:32:38.461 Delete I/O Completion Queue (04h): Supported 00:32:38.461 Create I/O Completion Queue (05h): Supported 00:32:38.461 Identify (06h): Supported 00:32:38.461 Abort (08h): Supported 00:32:38.461 Set Features (09h): Supported 00:32:38.461 Get Features (0Ah): Supported 00:32:38.461 Asynchronous Event Request (0Ch): Supported 00:32:38.461 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.461 Directive Send (19h): Supported 00:32:38.461 Directive Receive (1Ah): Supported 00:32:38.461 Virtualization Management (1Ch): Supported 00:32:38.461 Doorbell Buffer Config (7Ch): Supported 00:32:38.461 Format NVM (80h): Supported LBA-Change 00:32:38.461 I/O Commands 00:32:38.461 ------------ 00:32:38.461 Flush (00h): Supported LBA-Change 00:32:38.461 Write (01h): Supported LBA-Change 00:32:38.462 Read (02h): Supported 00:32:38.462 Compare (05h): 
Supported 00:32:38.462 Write Zeroes (08h): Supported LBA-Change 00:32:38.462 Dataset Management (09h): Supported LBA-Change 00:32:38.462 Unknown (0Ch): Supported 00:32:38.462 Unknown (12h): Supported 00:32:38.462 Copy (19h): Supported LBA-Change 00:32:38.462 Unknown (1Dh): Supported LBA-Change 00:32:38.462 00:32:38.462 Error Log 00:32:38.462 ========= 00:32:38.462 00:32:38.462 Arbitration 00:32:38.462 =========== 00:32:38.462 Arbitration Burst: no limit 00:32:38.462 00:32:38.462 Power Management 00:32:38.462 ================ 00:32:38.462 Number of Power States: 1 00:32:38.462 Current Power State: Power State #0 00:32:38.462 Power State #0: 00:32:38.462 Max Power: 25.00 W 00:32:38.462 Non-Operational State: Operational 00:32:38.462 Entry Latency: 16 microseconds 00:32:38.462 Exit Latency: 4 microseconds 00:32:38.462 Relative Read Throughput: 0 00:32:38.462 Relative Read Latency: 0 00:32:38.462 Relative Write Throughput: 0 00:32:38.462 Relative Write Latency: 0 00:32:38.462 Idle Power: Not Reported 00:32:38.462 Active Power: Not Reported 00:32:38.462 Non-Operational Permissive Mode: Not Supported 00:32:38.462 00:32:38.462 Health Information 00:32:38.462 ================== 00:32:38.462 Critical Warnings: 00:32:38.462 Available Spare Space: OK 00:32:38.462 Temperature: OK 00:32:38.462 Device Reliability: OK 00:32:38.462 Read Only: No 00:32:38.462 Volatile Memory Backup: OK 00:32:38.462 Current Temperature: 323 Kelvin (50 Celsius) 00:32:38.462 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:38.462 Available Spare: 0% 00:32:38.462 Available Spare Threshold: 0% 00:32:38.462 Life Percentage Used: 0% 00:32:38.462 Data Units Read: 2110 00:32:38.462 Data Units Written: 1897 00:32:38.462 Host Read Commands: 94605 00:32:38.462 Host Write Commands: 92875 00:32:38.462 Controller Busy Time: 0 minutes 00:32:38.462 Power Cycles: 0 00:32:38.462 Power On Hours: 0 hours 00:32:38.462 Unsafe Shutdowns: 0 00:32:38.462 Unrecoverable Media Errors: 0 00:32:38.462 Lifetime Error Log Entries: 0 00:32:38.462 Warning Temperature Time: 0 minutes 00:32:38.462 Critical Temperature Time: 0 minutes 00:32:38.462 00:32:38.462 Number of Queues 00:32:38.462 ================ 00:32:38.462 Number of I/O Submission Queues: 64 00:32:38.462 Number of I/O Completion Queues: 64 00:32:38.462 00:32:38.462 ZNS Specific Controller Data 00:32:38.462 ============================ 00:32:38.462 Zone Append Size Limit: 0 00:32:38.462 00:32:38.462 00:32:38.462 Active Namespaces 00:32:38.462 ================= 00:32:38.462 Namespace ID:1 00:32:38.462 Error Recovery Timeout: Unlimited 00:32:38.462 Command Set Identifier: NVM (00h) 00:32:38.462 Deallocate: Supported 00:32:38.462 Deallocated/Unwritten Error: Supported 00:32:38.462 Deallocated Read Value: All 0x00 00:32:38.462 Deallocate in Write Zeroes: Not Supported 00:32:38.462 Deallocated Guard Field: 0xFFFF 00:32:38.462 Flush: Supported 00:32:38.462 Reservation: Not Supported 00:32:38.462 Namespace Sharing Capabilities: Private 00:32:38.462 Size (in LBAs): 1048576 (4GiB) 00:32:38.462 Capacity (in LBAs): 1048576 (4GiB) 00:32:38.462 Utilization (in LBAs): 1048576 (4GiB) 00:32:38.462 Thin Provisioning: Not Supported 00:32:38.462 Per-NS Atomic Units: No 00:32:38.462 Maximum Single Source Range Length: 128 00:32:38.462 Maximum Copy Length: 128 00:32:38.462 Maximum Source Range Count: 128 00:32:38.462 NGUID/EUI64 Never Reused: No 00:32:38.462 Namespace Write Protected: No 00:32:38.462 Number of LBA Formats: 8 00:32:38.462 Current LBA Format: LBA Format #04 00:32:38.462 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:32:38.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.462 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.462 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.462 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.462 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.462 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.462 00:32:38.462 NVM Specific Namespace Data 00:32:38.462 =========================== 00:32:38.462 Logical Block Storage Tag Mask: 0 00:32:38.462 Protection Information Capabilities: 00:32:38.462 16b Guard Protection Information Storage Tag Support: No 00:32:38.462 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.462 Storage Tag Check Read Support: No 00:32:38.462 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Namespace ID:2 00:32:38.462 Error Recovery Timeout: Unlimited 00:32:38.462 Command Set Identifier: NVM (00h) 00:32:38.462 Deallocate: Supported 00:32:38.462 Deallocated/Unwritten Error: Supported 00:32:38.462 Deallocated Read Value: All 0x00 00:32:38.462 Deallocate in Write Zeroes: Not Supported 00:32:38.462 Deallocated Guard Field: 0xFFFF 00:32:38.462 Flush: Supported 00:32:38.462 Reservation: Not Supported 00:32:38.462 Namespace Sharing Capabilities: Private 00:32:38.462 Size (in LBAs): 1048576 (4GiB) 00:32:38.462 Capacity (in LBAs): 1048576 (4GiB) 00:32:38.462 Utilization (in LBAs): 1048576 (4GiB) 00:32:38.462 Thin Provisioning: Not Supported 00:32:38.462 Per-NS Atomic Units: No 00:32:38.462 Maximum Single Source Range Length: 128 00:32:38.462 Maximum Copy Length: 128 00:32:38.462 Maximum Source Range Count: 128 00:32:38.462 NGUID/EUI64 Never Reused: No 00:32:38.462 Namespace Write Protected: No 00:32:38.462 Number of LBA Formats: 8 00:32:38.462 Current LBA Format: LBA Format #04 00:32:38.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.462 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.462 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.462 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.462 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.462 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.462 00:32:38.462 NVM Specific Namespace Data 00:32:38.462 =========================== 00:32:38.462 Logical Block Storage Tag Mask: 0 00:32:38.462 Protection Information Capabilities: 00:32:38.462 16b Guard Protection Information Storage Tag Support: No 00:32:38.462 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:32:38.462 Storage Tag Check Read Support: No 00:32:38.462 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.462 Namespace ID:3 00:32:38.462 Error Recovery Timeout: Unlimited 00:32:38.462 Command Set Identifier: NVM (00h) 00:32:38.462 Deallocate: Supported 00:32:38.462 Deallocated/Unwritten Error: Supported 00:32:38.462 Deallocated Read Value: All 0x00 00:32:38.462 Deallocate in Write Zeroes: Not Supported 00:32:38.462 Deallocated Guard Field: 0xFFFF 00:32:38.462 Flush: Supported 00:32:38.462 Reservation: Not Supported 00:32:38.462 Namespace Sharing Capabilities: Private 00:32:38.462 Size (in LBAs): 1048576 (4GiB) 00:32:38.462 Capacity (in LBAs): 1048576 (4GiB) 00:32:38.462 Utilization (in LBAs): 1048576 (4GiB) 00:32:38.462 Thin Provisioning: Not Supported 00:32:38.462 Per-NS Atomic Units: No 00:32:38.462 Maximum Single Source Range Length: 128 00:32:38.462 Maximum Copy Length: 128 00:32:38.462 Maximum Source Range Count: 128 00:32:38.462 NGUID/EUI64 Never Reused: No 00:32:38.462 Namespace Write Protected: No 00:32:38.462 Number of LBA Formats: 8 00:32:38.462 Current LBA Format: LBA Format #04 00:32:38.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.463 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.463 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.463 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.463 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.463 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.463 00:32:38.463 NVM Specific Namespace Data 00:32:38.463 =========================== 00:32:38.463 Logical Block Storage Tag Mask: 0 00:32:38.463 Protection Information Capabilities: 00:32:38.463 16b Guard Protection Information Storage Tag Support: No 00:32:38.463 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.463 Storage Tag Check Read Support: No 00:32:38.463 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.463 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:38.463 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:32:38.723 ===================================================== 00:32:38.723 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:38.723 ===================================================== 00:32:38.723 Controller Capabilities/Features 00:32:38.723 ================================ 00:32:38.723 Vendor ID: 1b36 00:32:38.723 Subsystem Vendor ID: 1af4 00:32:38.723 Serial Number: 12340 00:32:38.723 Model Number: QEMU NVMe Ctrl 00:32:38.723 Firmware Version: 8.0.0 00:32:38.723 Recommended Arb Burst: 6 00:32:38.723 IEEE OUI Identifier: 00 54 52 00:32:38.723 Multi-path I/O 00:32:38.723 May have multiple subsystem ports: No 00:32:38.723 May have multiple controllers: No 00:32:38.723 Associated with SR-IOV VF: No 00:32:38.723 Max Data Transfer Size: 524288 00:32:38.723 Max Number of Namespaces: 256 00:32:38.723 Max Number of I/O Queues: 64 00:32:38.723 NVMe Specification Version (VS): 1.4 00:32:38.723 NVMe Specification Version (Identify): 1.4 00:32:38.723 Maximum Queue Entries: 2048 00:32:38.723 Contiguous Queues Required: Yes 00:32:38.723 Arbitration Mechanisms Supported 00:32:38.723 Weighted Round Robin: Not Supported 00:32:38.723 Vendor Specific: Not Supported 00:32:38.723 Reset Timeout: 7500 ms 00:32:38.723 Doorbell Stride: 4 bytes 00:32:38.723 NVM Subsystem Reset: Not Supported 00:32:38.723 Command Sets Supported 00:32:38.723 NVM Command Set: Supported 00:32:38.723 Boot Partition: Not Supported 00:32:38.723 Memory Page Size Minimum: 4096 bytes 00:32:38.723 Memory Page Size Maximum: 65536 bytes 00:32:38.723 Persistent Memory Region: Not Supported 00:32:38.723 Optional Asynchronous Events Supported 00:32:38.723 Namespace Attribute Notices: Supported 00:32:38.723 Firmware Activation Notices: Not Supported 00:32:38.723 ANA Change Notices: Not Supported 00:32:38.723 PLE Aggregate Log Change Notices: Not Supported 00:32:38.723 LBA Status Info Alert Notices: Not Supported 00:32:38.723 EGE Aggregate Log Change Notices: Not Supported 00:32:38.723 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.723 Zone Descriptor Change Notices: Not Supported 00:32:38.723 Discovery Log Change Notices: Not Supported 00:32:38.723 Controller Attributes 00:32:38.723 128-bit Host Identifier: Not Supported 00:32:38.723 Non-Operational Permissive Mode: Not Supported 00:32:38.723 NVM Sets: Not Supported 00:32:38.723 Read Recovery Levels: Not Supported 00:32:38.723 Endurance Groups: Not Supported 00:32:38.723 Predictable Latency Mode: Not Supported 00:32:38.723 Traffic Based Keep ALive: Not Supported 00:32:38.723 Namespace Granularity: Not Supported 00:32:38.723 SQ Associations: Not Supported 00:32:38.723 UUID List: Not Supported 00:32:38.723 Multi-Domain Subsystem: Not Supported 00:32:38.723 Fixed Capacity Management: Not Supported 00:32:38.723 Variable Capacity Management: Not Supported 00:32:38.723 Delete Endurance Group: Not Supported 00:32:38.723 Delete NVM Set: Not Supported 00:32:38.723 Extended LBA Formats Supported: Supported 00:32:38.723 Flexible Data Placement Supported: Not Supported 00:32:38.723 00:32:38.723 Controller Memory Buffer Support 00:32:38.723 ================================ 00:32:38.723 Supported: No 00:32:38.723 00:32:38.723 Persistent Memory Region Support 00:32:38.723 
================================ 00:32:38.723 Supported: No 00:32:38.723 00:32:38.723 Admin Command Set Attributes 00:32:38.723 ============================ 00:32:38.723 Security Send/Receive: Not Supported 00:32:38.723 Format NVM: Supported 00:32:38.723 Firmware Activate/Download: Not Supported 00:32:38.723 Namespace Management: Supported 00:32:38.723 Device Self-Test: Not Supported 00:32:38.723 Directives: Supported 00:32:38.723 NVMe-MI: Not Supported 00:32:38.723 Virtualization Management: Not Supported 00:32:38.723 Doorbell Buffer Config: Supported 00:32:38.723 Get LBA Status Capability: Not Supported 00:32:38.723 Command & Feature Lockdown Capability: Not Supported 00:32:38.723 Abort Command Limit: 4 00:32:38.723 Async Event Request Limit: 4 00:32:38.723 Number of Firmware Slots: N/A 00:32:38.723 Firmware Slot 1 Read-Only: N/A 00:32:38.723 Firmware Activation Without Reset: N/A 00:32:38.723 Multiple Update Detection Support: N/A 00:32:38.723 Firmware Update Granularity: No Information Provided 00:32:38.723 Per-Namespace SMART Log: Yes 00:32:38.723 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.723 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:38.723 Command Effects Log Page: Supported 00:32:38.723 Get Log Page Extended Data: Supported 00:32:38.723 Telemetry Log Pages: Not Supported 00:32:38.723 Persistent Event Log Pages: Not Supported 00:32:38.723 Supported Log Pages Log Page: May Support 00:32:38.723 Commands Supported & Effects Log Page: Not Supported 00:32:38.723 Feature Identifiers & Effects Log Page:May Support 00:32:38.723 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.723 Data Area 4 for Telemetry Log: Not Supported 00:32:38.723 Error Log Page Entries Supported: 1 00:32:38.723 Keep Alive: Not Supported 00:32:38.723 00:32:38.723 NVM Command Set Attributes 00:32:38.723 ========================== 00:32:38.723 Submission Queue Entry Size 00:32:38.723 Max: 64 00:32:38.723 Min: 64 00:32:38.723 Completion Queue Entry Size 00:32:38.723 Max: 16 00:32:38.723 Min: 16 00:32:38.723 Number of Namespaces: 256 00:32:38.723 Compare Command: Supported 00:32:38.723 Write Uncorrectable Command: Not Supported 00:32:38.723 Dataset Management Command: Supported 00:32:38.723 Write Zeroes Command: Supported 00:32:38.723 Set Features Save Field: Supported 00:32:38.723 Reservations: Not Supported 00:32:38.723 Timestamp: Supported 00:32:38.723 Copy: Supported 00:32:38.723 Volatile Write Cache: Present 00:32:38.723 Atomic Write Unit (Normal): 1 00:32:38.723 Atomic Write Unit (PFail): 1 00:32:38.723 Atomic Compare & Write Unit: 1 00:32:38.723 Fused Compare & Write: Not Supported 00:32:38.723 Scatter-Gather List 00:32:38.723 SGL Command Set: Supported 00:32:38.723 SGL Keyed: Not Supported 00:32:38.723 SGL Bit Bucket Descriptor: Not Supported 00:32:38.723 SGL Metadata Pointer: Not Supported 00:32:38.723 Oversized SGL: Not Supported 00:32:38.723 SGL Metadata Address: Not Supported 00:32:38.723 SGL Offset: Not Supported 00:32:38.723 Transport SGL Data Block: Not Supported 00:32:38.723 Replay Protected Memory Block: Not Supported 00:32:38.723 00:32:38.723 Firmware Slot Information 00:32:38.723 ========================= 00:32:38.723 Active slot: 1 00:32:38.723 Slot 1 Firmware Revision: 1.0 00:32:38.723 00:32:38.723 00:32:38.723 Commands Supported and Effects 00:32:38.723 ============================== 00:32:38.723 Admin Commands 00:32:38.723 -------------- 00:32:38.723 Delete I/O Submission Queue (00h): Supported 00:32:38.723 Create I/O Submission Queue (01h): Supported 00:32:38.723 
Get Log Page (02h): Supported 00:32:38.723 Delete I/O Completion Queue (04h): Supported 00:32:38.723 Create I/O Completion Queue (05h): Supported 00:32:38.723 Identify (06h): Supported 00:32:38.723 Abort (08h): Supported 00:32:38.723 Set Features (09h): Supported 00:32:38.723 Get Features (0Ah): Supported 00:32:38.724 Asynchronous Event Request (0Ch): Supported 00:32:38.724 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.724 Directive Send (19h): Supported 00:32:38.724 Directive Receive (1Ah): Supported 00:32:38.724 Virtualization Management (1Ch): Supported 00:32:38.724 Doorbell Buffer Config (7Ch): Supported 00:32:38.724 Format NVM (80h): Supported LBA-Change 00:32:38.724 I/O Commands 00:32:38.724 ------------ 00:32:38.724 Flush (00h): Supported LBA-Change 00:32:38.724 Write (01h): Supported LBA-Change 00:32:38.724 Read (02h): Supported 00:32:38.724 Compare (05h): Supported 00:32:38.724 Write Zeroes (08h): Supported LBA-Change 00:32:38.724 Dataset Management (09h): Supported LBA-Change 00:32:38.724 Unknown (0Ch): Supported 00:32:38.724 Unknown (12h): Supported 00:32:38.724 Copy (19h): Supported LBA-Change 00:32:38.724 Unknown (1Dh): Supported LBA-Change 00:32:38.724 00:32:38.724 Error Log 00:32:38.724 ========= 00:32:38.724 00:32:38.724 Arbitration 00:32:38.724 =========== 00:32:38.724 Arbitration Burst: no limit 00:32:38.724 00:32:38.724 Power Management 00:32:38.724 ================ 00:32:38.724 Number of Power States: 1 00:32:38.724 Current Power State: Power State #0 00:32:38.724 Power State #0: 00:32:38.724 Max Power: 25.00 W 00:32:38.724 Non-Operational State: Operational 00:32:38.724 Entry Latency: 16 microseconds 00:32:38.724 Exit Latency: 4 microseconds 00:32:38.724 Relative Read Throughput: 0 00:32:38.724 Relative Read Latency: 0 00:32:38.724 Relative Write Throughput: 0 00:32:38.724 Relative Write Latency: 0 00:32:38.724 Idle Power: Not Reported 00:32:38.724 Active Power: Not Reported 00:32:38.724 Non-Operational Permissive Mode: Not Supported 00:32:38.724 00:32:38.724 Health Information 00:32:38.724 ================== 00:32:38.724 Critical Warnings: 00:32:38.724 Available Spare Space: OK 00:32:38.724 Temperature: OK 00:32:38.724 Device Reliability: OK 00:32:38.724 Read Only: No 00:32:38.724 Volatile Memory Backup: OK 00:32:38.724 Current Temperature: 323 Kelvin (50 Celsius) 00:32:38.724 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:38.724 Available Spare: 0% 00:32:38.724 Available Spare Threshold: 0% 00:32:38.724 Life Percentage Used: 0% 00:32:38.724 Data Units Read: 685 00:32:38.724 Data Units Written: 613 00:32:38.724 Host Read Commands: 31173 00:32:38.724 Host Write Commands: 30959 00:32:38.724 Controller Busy Time: 0 minutes 00:32:38.724 Power Cycles: 0 00:32:38.724 Power On Hours: 0 hours 00:32:38.724 Unsafe Shutdowns: 0 00:32:38.724 Unrecoverable Media Errors: 0 00:32:38.724 Lifetime Error Log Entries: 0 00:32:38.724 Warning Temperature Time: 0 minutes 00:32:38.724 Critical Temperature Time: 0 minutes 00:32:38.724 00:32:38.724 Number of Queues 00:32:38.724 ================ 00:32:38.724 Number of I/O Submission Queues: 64 00:32:38.724 Number of I/O Completion Queues: 64 00:32:38.724 00:32:38.724 ZNS Specific Controller Data 00:32:38.724 ============================ 00:32:38.724 Zone Append Size Limit: 0 00:32:38.724 00:32:38.724 00:32:38.724 Active Namespaces 00:32:38.724 ================= 00:32:38.724 Namespace ID:1 00:32:38.724 Error Recovery Timeout: Unlimited 00:32:38.724 Command Set Identifier: NVM (00h) 00:32:38.724 Deallocate: Supported 
00:32:38.724 Deallocated/Unwritten Error: Supported 00:32:38.724 Deallocated Read Value: All 0x00 00:32:38.724 Deallocate in Write Zeroes: Not Supported 00:32:38.724 Deallocated Guard Field: 0xFFFF 00:32:38.724 Flush: Supported 00:32:38.724 Reservation: Not Supported 00:32:38.724 Metadata Transferred as: Separate Metadata Buffer 00:32:38.724 Namespace Sharing Capabilities: Private 00:32:38.724 Size (in LBAs): 1548666 (5GiB) 00:32:38.724 Capacity (in LBAs): 1548666 (5GiB) 00:32:38.724 Utilization (in LBAs): 1548666 (5GiB) 00:32:38.724 Thin Provisioning: Not Supported 00:32:38.724 Per-NS Atomic Units: No 00:32:38.724 Maximum Single Source Range Length: 128 00:32:38.724 Maximum Copy Length: 128 00:32:38.724 Maximum Source Range Count: 128 00:32:38.724 NGUID/EUI64 Never Reused: No 00:32:38.724 Namespace Write Protected: No 00:32:38.724 Number of LBA Formats: 8 00:32:38.724 Current LBA Format: LBA Format #07 00:32:38.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:38.724 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:38.724 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:38.724 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:38.724 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:38.724 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:38.724 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:38.724 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:38.724 00:32:38.724 NVM Specific Namespace Data 00:32:38.724 =========================== 00:32:38.724 Logical Block Storage Tag Mask: 0 00:32:38.724 Protection Information Capabilities: 00:32:38.724 16b Guard Protection Information Storage Tag Support: No 00:32:38.724 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:38.724 Storage Tag Check Read Support: No 00:32:38.724 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:38.724 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:38.724 13:53:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:32:38.984 ===================================================== 00:32:38.984 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:38.984 ===================================================== 00:32:38.984 Controller Capabilities/Features 00:32:38.984 ================================ 00:32:38.984 Vendor ID: 1b36 00:32:38.984 Subsystem Vendor ID: 1af4 00:32:38.984 Serial Number: 12341 00:32:38.984 Model Number: QEMU NVMe Ctrl 00:32:38.984 Firmware Version: 8.0.0 00:32:38.984 Recommended Arb Burst: 6 00:32:38.984 IEEE OUI Identifier: 00 54 52 00:32:38.984 Multi-path I/O 00:32:38.984 May have multiple subsystem ports: No 00:32:38.984 May have multiple 
controllers: No 00:32:38.984 Associated with SR-IOV VF: No 00:32:38.984 Max Data Transfer Size: 524288 00:32:38.984 Max Number of Namespaces: 256 00:32:38.984 Max Number of I/O Queues: 64 00:32:38.984 NVMe Specification Version (VS): 1.4 00:32:38.984 NVMe Specification Version (Identify): 1.4 00:32:38.984 Maximum Queue Entries: 2048 00:32:38.984 Contiguous Queues Required: Yes 00:32:38.984 Arbitration Mechanisms Supported 00:32:38.984 Weighted Round Robin: Not Supported 00:32:38.984 Vendor Specific: Not Supported 00:32:38.984 Reset Timeout: 7500 ms 00:32:38.984 Doorbell Stride: 4 bytes 00:32:38.984 NVM Subsystem Reset: Not Supported 00:32:38.984 Command Sets Supported 00:32:38.984 NVM Command Set: Supported 00:32:38.984 Boot Partition: Not Supported 00:32:38.984 Memory Page Size Minimum: 4096 bytes 00:32:38.984 Memory Page Size Maximum: 65536 bytes 00:32:38.984 Persistent Memory Region: Not Supported 00:32:38.984 Optional Asynchronous Events Supported 00:32:38.984 Namespace Attribute Notices: Supported 00:32:38.984 Firmware Activation Notices: Not Supported 00:32:38.984 ANA Change Notices: Not Supported 00:32:38.984 PLE Aggregate Log Change Notices: Not Supported 00:32:38.984 LBA Status Info Alert Notices: Not Supported 00:32:38.984 EGE Aggregate Log Change Notices: Not Supported 00:32:38.984 Normal NVM Subsystem Shutdown event: Not Supported 00:32:38.984 Zone Descriptor Change Notices: Not Supported 00:32:38.984 Discovery Log Change Notices: Not Supported 00:32:38.984 Controller Attributes 00:32:38.984 128-bit Host Identifier: Not Supported 00:32:38.984 Non-Operational Permissive Mode: Not Supported 00:32:38.984 NVM Sets: Not Supported 00:32:38.984 Read Recovery Levels: Not Supported 00:32:38.984 Endurance Groups: Not Supported 00:32:38.984 Predictable Latency Mode: Not Supported 00:32:38.984 Traffic Based Keep ALive: Not Supported 00:32:38.984 Namespace Granularity: Not Supported 00:32:38.984 SQ Associations: Not Supported 00:32:38.984 UUID List: Not Supported 00:32:38.984 Multi-Domain Subsystem: Not Supported 00:32:38.984 Fixed Capacity Management: Not Supported 00:32:38.984 Variable Capacity Management: Not Supported 00:32:38.984 Delete Endurance Group: Not Supported 00:32:38.984 Delete NVM Set: Not Supported 00:32:38.984 Extended LBA Formats Supported: Supported 00:32:38.984 Flexible Data Placement Supported: Not Supported 00:32:38.984 00:32:38.984 Controller Memory Buffer Support 00:32:38.984 ================================ 00:32:38.984 Supported: No 00:32:38.984 00:32:38.984 Persistent Memory Region Support 00:32:38.984 ================================ 00:32:38.984 Supported: No 00:32:38.984 00:32:38.984 Admin Command Set Attributes 00:32:38.984 ============================ 00:32:38.984 Security Send/Receive: Not Supported 00:32:38.984 Format NVM: Supported 00:32:38.985 Firmware Activate/Download: Not Supported 00:32:38.985 Namespace Management: Supported 00:32:38.985 Device Self-Test: Not Supported 00:32:38.985 Directives: Supported 00:32:38.985 NVMe-MI: Not Supported 00:32:38.985 Virtualization Management: Not Supported 00:32:38.985 Doorbell Buffer Config: Supported 00:32:38.985 Get LBA Status Capability: Not Supported 00:32:38.985 Command & Feature Lockdown Capability: Not Supported 00:32:38.985 Abort Command Limit: 4 00:32:38.985 Async Event Request Limit: 4 00:32:38.985 Number of Firmware Slots: N/A 00:32:38.985 Firmware Slot 1 Read-Only: N/A 00:32:38.985 Firmware Activation Without Reset: N/A 00:32:38.985 Multiple Update Detection Support: N/A 00:32:38.985 Firmware Update 
Granularity: No Information Provided 00:32:38.985 Per-Namespace SMART Log: Yes 00:32:38.985 Asymmetric Namespace Access Log Page: Not Supported 00:32:38.985 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:32:38.985 Command Effects Log Page: Supported 00:32:38.985 Get Log Page Extended Data: Supported 00:32:38.985 Telemetry Log Pages: Not Supported 00:32:38.985 Persistent Event Log Pages: Not Supported 00:32:38.985 Supported Log Pages Log Page: May Support 00:32:38.985 Commands Supported & Effects Log Page: Not Supported 00:32:38.985 Feature Identifiers & Effects Log Page:May Support 00:32:38.985 NVMe-MI Commands & Effects Log Page: May Support 00:32:38.985 Data Area 4 for Telemetry Log: Not Supported 00:32:38.985 Error Log Page Entries Supported: 1 00:32:38.985 Keep Alive: Not Supported 00:32:38.985 00:32:38.985 NVM Command Set Attributes 00:32:38.985 ========================== 00:32:38.985 Submission Queue Entry Size 00:32:38.985 Max: 64 00:32:38.985 Min: 64 00:32:38.985 Completion Queue Entry Size 00:32:38.985 Max: 16 00:32:38.985 Min: 16 00:32:38.985 Number of Namespaces: 256 00:32:38.985 Compare Command: Supported 00:32:38.985 Write Uncorrectable Command: Not Supported 00:32:38.985 Dataset Management Command: Supported 00:32:38.985 Write Zeroes Command: Supported 00:32:38.985 Set Features Save Field: Supported 00:32:38.985 Reservations: Not Supported 00:32:38.985 Timestamp: Supported 00:32:38.985 Copy: Supported 00:32:38.985 Volatile Write Cache: Present 00:32:38.985 Atomic Write Unit (Normal): 1 00:32:38.985 Atomic Write Unit (PFail): 1 00:32:38.985 Atomic Compare & Write Unit: 1 00:32:38.985 Fused Compare & Write: Not Supported 00:32:38.985 Scatter-Gather List 00:32:38.985 SGL Command Set: Supported 00:32:38.985 SGL Keyed: Not Supported 00:32:38.985 SGL Bit Bucket Descriptor: Not Supported 00:32:38.985 SGL Metadata Pointer: Not Supported 00:32:38.985 Oversized SGL: Not Supported 00:32:38.985 SGL Metadata Address: Not Supported 00:32:38.985 SGL Offset: Not Supported 00:32:38.985 Transport SGL Data Block: Not Supported 00:32:38.985 Replay Protected Memory Block: Not Supported 00:32:38.985 00:32:38.985 Firmware Slot Information 00:32:38.985 ========================= 00:32:38.985 Active slot: 1 00:32:38.985 Slot 1 Firmware Revision: 1.0 00:32:38.985 00:32:38.985 00:32:38.985 Commands Supported and Effects 00:32:38.985 ============================== 00:32:38.985 Admin Commands 00:32:38.985 -------------- 00:32:38.985 Delete I/O Submission Queue (00h): Supported 00:32:38.985 Create I/O Submission Queue (01h): Supported 00:32:38.985 Get Log Page (02h): Supported 00:32:38.985 Delete I/O Completion Queue (04h): Supported 00:32:38.985 Create I/O Completion Queue (05h): Supported 00:32:38.985 Identify (06h): Supported 00:32:38.985 Abort (08h): Supported 00:32:38.985 Set Features (09h): Supported 00:32:38.985 Get Features (0Ah): Supported 00:32:38.985 Asynchronous Event Request (0Ch): Supported 00:32:38.985 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:38.985 Directive Send (19h): Supported 00:32:38.985 Directive Receive (1Ah): Supported 00:32:38.985 Virtualization Management (1Ch): Supported 00:32:38.985 Doorbell Buffer Config (7Ch): Supported 00:32:38.985 Format NVM (80h): Supported LBA-Change 00:32:38.985 I/O Commands 00:32:38.985 ------------ 00:32:38.985 Flush (00h): Supported LBA-Change 00:32:38.985 Write (01h): Supported LBA-Change 00:32:38.985 Read (02h): Supported 00:32:38.985 Compare (05h): Supported 00:32:38.985 Write Zeroes (08h): Supported LBA-Change 00:32:38.985 
Dataset Management (09h): Supported LBA-Change 00:32:38.985 Unknown (0Ch): Supported 00:32:38.985 Unknown (12h): Supported 00:32:38.985 Copy (19h): Supported LBA-Change 00:32:38.985 Unknown (1Dh): Supported LBA-Change 00:32:38.985 00:32:38.985 Error Log 00:32:38.985 ========= 00:32:38.985 00:32:38.985 Arbitration 00:32:38.985 =========== 00:32:38.985 Arbitration Burst: no limit 00:32:38.985 00:32:38.985 Power Management 00:32:38.985 ================ 00:32:38.985 Number of Power States: 1 00:32:38.985 Current Power State: Power State #0 00:32:38.985 Power State #0: 00:32:38.985 Max Power: 25.00 W 00:32:38.985 Non-Operational State: Operational 00:32:38.985 Entry Latency: 16 microseconds 00:32:38.985 Exit Latency: 4 microseconds 00:32:38.985 Relative Read Throughput: 0 00:32:38.985 Relative Read Latency: 0 00:32:38.985 Relative Write Throughput: 0 00:32:38.985 Relative Write Latency: 0 00:32:39.245 Idle Power: Not Reported 00:32:39.245 Active Power: Not Reported 00:32:39.245 Non-Operational Permissive Mode: Not Supported 00:32:39.245 00:32:39.245 Health Information 00:32:39.245 ================== 00:32:39.245 Critical Warnings: 00:32:39.245 Available Spare Space: OK 00:32:39.245 Temperature: OK 00:32:39.245 Device Reliability: OK 00:32:39.245 Read Only: No 00:32:39.245 Volatile Memory Backup: OK 00:32:39.245 Current Temperature: 323 Kelvin (50 Celsius) 00:32:39.245 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:39.245 Available Spare: 0% 00:32:39.245 Available Spare Threshold: 0% 00:32:39.245 Life Percentage Used: 0% 00:32:39.245 Data Units Read: 1068 00:32:39.245 Data Units Written: 930 00:32:39.245 Host Read Commands: 47099 00:32:39.245 Host Write Commands: 45792 00:32:39.245 Controller Busy Time: 0 minutes 00:32:39.245 Power Cycles: 0 00:32:39.245 Power On Hours: 0 hours 00:32:39.245 Unsafe Shutdowns: 0 00:32:39.245 Unrecoverable Media Errors: 0 00:32:39.245 Lifetime Error Log Entries: 0 00:32:39.245 Warning Temperature Time: 0 minutes 00:32:39.245 Critical Temperature Time: 0 minutes 00:32:39.245 00:32:39.245 Number of Queues 00:32:39.245 ================ 00:32:39.245 Number of I/O Submission Queues: 64 00:32:39.245 Number of I/O Completion Queues: 64 00:32:39.245 00:32:39.245 ZNS Specific Controller Data 00:32:39.245 ============================ 00:32:39.245 Zone Append Size Limit: 0 00:32:39.245 00:32:39.245 00:32:39.245 Active Namespaces 00:32:39.245 ================= 00:32:39.245 Namespace ID:1 00:32:39.245 Error Recovery Timeout: Unlimited 00:32:39.245 Command Set Identifier: NVM (00h) 00:32:39.245 Deallocate: Supported 00:32:39.245 Deallocated/Unwritten Error: Supported 00:32:39.245 Deallocated Read Value: All 0x00 00:32:39.245 Deallocate in Write Zeroes: Not Supported 00:32:39.245 Deallocated Guard Field: 0xFFFF 00:32:39.245 Flush: Supported 00:32:39.245 Reservation: Not Supported 00:32:39.245 Namespace Sharing Capabilities: Private 00:32:39.245 Size (in LBAs): 1310720 (5GiB) 00:32:39.245 Capacity (in LBAs): 1310720 (5GiB) 00:32:39.245 Utilization (in LBAs): 1310720 (5GiB) 00:32:39.245 Thin Provisioning: Not Supported 00:32:39.245 Per-NS Atomic Units: No 00:32:39.245 Maximum Single Source Range Length: 128 00:32:39.245 Maximum Copy Length: 128 00:32:39.245 Maximum Source Range Count: 128 00:32:39.245 NGUID/EUI64 Never Reused: No 00:32:39.245 Namespace Write Protected: No 00:32:39.245 Number of LBA Formats: 8 00:32:39.245 Current LBA Format: LBA Format #04 00:32:39.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.245 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:32:39.245 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:39.245 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:39.245 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:39.245 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:39.245 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:39.245 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:39.245 00:32:39.245 NVM Specific Namespace Data 00:32:39.245 =========================== 00:32:39.245 Logical Block Storage Tag Mask: 0 00:32:39.245 Protection Information Capabilities: 00:32:39.245 16b Guard Protection Information Storage Tag Support: No 00:32:39.245 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:39.245 Storage Tag Check Read Support: No 00:32:39.245 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.245 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.246 13:53:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:39.246 13:53:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:32:39.506 ===================================================== 00:32:39.506 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:39.506 ===================================================== 00:32:39.506 Controller Capabilities/Features 00:32:39.506 ================================ 00:32:39.506 Vendor ID: 1b36 00:32:39.506 Subsystem Vendor ID: 1af4 00:32:39.506 Serial Number: 12342 00:32:39.506 Model Number: QEMU NVMe Ctrl 00:32:39.506 Firmware Version: 8.0.0 00:32:39.506 Recommended Arb Burst: 6 00:32:39.506 IEEE OUI Identifier: 00 54 52 00:32:39.506 Multi-path I/O 00:32:39.506 May have multiple subsystem ports: No 00:32:39.506 May have multiple controllers: No 00:32:39.506 Associated with SR-IOV VF: No 00:32:39.506 Max Data Transfer Size: 524288 00:32:39.506 Max Number of Namespaces: 256 00:32:39.506 Max Number of I/O Queues: 64 00:32:39.506 NVMe Specification Version (VS): 1.4 00:32:39.506 NVMe Specification Version (Identify): 1.4 00:32:39.506 Maximum Queue Entries: 2048 00:32:39.506 Contiguous Queues Required: Yes 00:32:39.506 Arbitration Mechanisms Supported 00:32:39.506 Weighted Round Robin: Not Supported 00:32:39.506 Vendor Specific: Not Supported 00:32:39.506 Reset Timeout: 7500 ms 00:32:39.506 Doorbell Stride: 4 bytes 00:32:39.506 NVM Subsystem Reset: Not Supported 00:32:39.506 Command Sets Supported 00:32:39.506 NVM Command Set: Supported 00:32:39.506 Boot Partition: Not Supported 00:32:39.506 Memory Page Size Minimum: 4096 bytes 00:32:39.506 Memory Page Size Maximum: 65536 bytes 00:32:39.506 Persistent Memory Region: Not Supported 00:32:39.506 Optional Asynchronous Events Supported 00:32:39.506 Namespace Attribute Notices: Supported 00:32:39.506 Firmware 
Activation Notices: Not Supported 00:32:39.506 ANA Change Notices: Not Supported 00:32:39.506 PLE Aggregate Log Change Notices: Not Supported 00:32:39.506 LBA Status Info Alert Notices: Not Supported 00:32:39.506 EGE Aggregate Log Change Notices: Not Supported 00:32:39.506 Normal NVM Subsystem Shutdown event: Not Supported 00:32:39.506 Zone Descriptor Change Notices: Not Supported 00:32:39.506 Discovery Log Change Notices: Not Supported 00:32:39.506 Controller Attributes 00:32:39.506 128-bit Host Identifier: Not Supported 00:32:39.506 Non-Operational Permissive Mode: Not Supported 00:32:39.506 NVM Sets: Not Supported 00:32:39.506 Read Recovery Levels: Not Supported 00:32:39.506 Endurance Groups: Not Supported 00:32:39.506 Predictable Latency Mode: Not Supported 00:32:39.506 Traffic Based Keep ALive: Not Supported 00:32:39.506 Namespace Granularity: Not Supported 00:32:39.506 SQ Associations: Not Supported 00:32:39.506 UUID List: Not Supported 00:32:39.506 Multi-Domain Subsystem: Not Supported 00:32:39.506 Fixed Capacity Management: Not Supported 00:32:39.506 Variable Capacity Management: Not Supported 00:32:39.506 Delete Endurance Group: Not Supported 00:32:39.506 Delete NVM Set: Not Supported 00:32:39.506 Extended LBA Formats Supported: Supported 00:32:39.506 Flexible Data Placement Supported: Not Supported 00:32:39.506 00:32:39.506 Controller Memory Buffer Support 00:32:39.507 ================================ 00:32:39.507 Supported: No 00:32:39.507 00:32:39.507 Persistent Memory Region Support 00:32:39.507 ================================ 00:32:39.507 Supported: No 00:32:39.507 00:32:39.507 Admin Command Set Attributes 00:32:39.507 ============================ 00:32:39.507 Security Send/Receive: Not Supported 00:32:39.507 Format NVM: Supported 00:32:39.507 Firmware Activate/Download: Not Supported 00:32:39.507 Namespace Management: Supported 00:32:39.507 Device Self-Test: Not Supported 00:32:39.507 Directives: Supported 00:32:39.507 NVMe-MI: Not Supported 00:32:39.507 Virtualization Management: Not Supported 00:32:39.507 Doorbell Buffer Config: Supported 00:32:39.507 Get LBA Status Capability: Not Supported 00:32:39.507 Command & Feature Lockdown Capability: Not Supported 00:32:39.507 Abort Command Limit: 4 00:32:39.507 Async Event Request Limit: 4 00:32:39.507 Number of Firmware Slots: N/A 00:32:39.507 Firmware Slot 1 Read-Only: N/A 00:32:39.507 Firmware Activation Without Reset: N/A 00:32:39.507 Multiple Update Detection Support: N/A 00:32:39.507 Firmware Update Granularity: No Information Provided 00:32:39.507 Per-Namespace SMART Log: Yes 00:32:39.507 Asymmetric Namespace Access Log Page: Not Supported 00:32:39.507 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:32:39.507 Command Effects Log Page: Supported 00:32:39.507 Get Log Page Extended Data: Supported 00:32:39.507 Telemetry Log Pages: Not Supported 00:32:39.507 Persistent Event Log Pages: Not Supported 00:32:39.507 Supported Log Pages Log Page: May Support 00:32:39.507 Commands Supported & Effects Log Page: Not Supported 00:32:39.507 Feature Identifiers & Effects Log Page:May Support 00:32:39.507 NVMe-MI Commands & Effects Log Page: May Support 00:32:39.507 Data Area 4 for Telemetry Log: Not Supported 00:32:39.507 Error Log Page Entries Supported: 1 00:32:39.507 Keep Alive: Not Supported 00:32:39.507 00:32:39.507 NVM Command Set Attributes 00:32:39.507 ========================== 00:32:39.507 Submission Queue Entry Size 00:32:39.507 Max: 64 00:32:39.507 Min: 64 00:32:39.507 Completion Queue Entry Size 00:32:39.507 Max: 16 
00:32:39.507 Min: 16 00:32:39.507 Number of Namespaces: 256 00:32:39.507 Compare Command: Supported 00:32:39.507 Write Uncorrectable Command: Not Supported 00:32:39.507 Dataset Management Command: Supported 00:32:39.507 Write Zeroes Command: Supported 00:32:39.507 Set Features Save Field: Supported 00:32:39.507 Reservations: Not Supported 00:32:39.507 Timestamp: Supported 00:32:39.507 Copy: Supported 00:32:39.507 Volatile Write Cache: Present 00:32:39.507 Atomic Write Unit (Normal): 1 00:32:39.507 Atomic Write Unit (PFail): 1 00:32:39.507 Atomic Compare & Write Unit: 1 00:32:39.507 Fused Compare & Write: Not Supported 00:32:39.507 Scatter-Gather List 00:32:39.507 SGL Command Set: Supported 00:32:39.507 SGL Keyed: Not Supported 00:32:39.507 SGL Bit Bucket Descriptor: Not Supported 00:32:39.507 SGL Metadata Pointer: Not Supported 00:32:39.507 Oversized SGL: Not Supported 00:32:39.507 SGL Metadata Address: Not Supported 00:32:39.507 SGL Offset: Not Supported 00:32:39.507 Transport SGL Data Block: Not Supported 00:32:39.507 Replay Protected Memory Block: Not Supported 00:32:39.507 00:32:39.507 Firmware Slot Information 00:32:39.507 ========================= 00:32:39.507 Active slot: 1 00:32:39.507 Slot 1 Firmware Revision: 1.0 00:32:39.507 00:32:39.507 00:32:39.507 Commands Supported and Effects 00:32:39.507 ============================== 00:32:39.507 Admin Commands 00:32:39.507 -------------- 00:32:39.507 Delete I/O Submission Queue (00h): Supported 00:32:39.507 Create I/O Submission Queue (01h): Supported 00:32:39.507 Get Log Page (02h): Supported 00:32:39.507 Delete I/O Completion Queue (04h): Supported 00:32:39.507 Create I/O Completion Queue (05h): Supported 00:32:39.507 Identify (06h): Supported 00:32:39.507 Abort (08h): Supported 00:32:39.507 Set Features (09h): Supported 00:32:39.507 Get Features (0Ah): Supported 00:32:39.507 Asynchronous Event Request (0Ch): Supported 00:32:39.507 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:39.507 Directive Send (19h): Supported 00:32:39.507 Directive Receive (1Ah): Supported 00:32:39.507 Virtualization Management (1Ch): Supported 00:32:39.507 Doorbell Buffer Config (7Ch): Supported 00:32:39.507 Format NVM (80h): Supported LBA-Change 00:32:39.507 I/O Commands 00:32:39.507 ------------ 00:32:39.507 Flush (00h): Supported LBA-Change 00:32:39.507 Write (01h): Supported LBA-Change 00:32:39.507 Read (02h): Supported 00:32:39.507 Compare (05h): Supported 00:32:39.507 Write Zeroes (08h): Supported LBA-Change 00:32:39.507 Dataset Management (09h): Supported LBA-Change 00:32:39.507 Unknown (0Ch): Supported 00:32:39.507 Unknown (12h): Supported 00:32:39.507 Copy (19h): Supported LBA-Change 00:32:39.507 Unknown (1Dh): Supported LBA-Change 00:32:39.507 00:32:39.507 Error Log 00:32:39.507 ========= 00:32:39.507 00:32:39.507 Arbitration 00:32:39.507 =========== 00:32:39.507 Arbitration Burst: no limit 00:32:39.507 00:32:39.507 Power Management 00:32:39.507 ================ 00:32:39.507 Number of Power States: 1 00:32:39.507 Current Power State: Power State #0 00:32:39.507 Power State #0: 00:32:39.507 Max Power: 25.00 W 00:32:39.507 Non-Operational State: Operational 00:32:39.507 Entry Latency: 16 microseconds 00:32:39.507 Exit Latency: 4 microseconds 00:32:39.507 Relative Read Throughput: 0 00:32:39.507 Relative Read Latency: 0 00:32:39.507 Relative Write Throughput: 0 00:32:39.507 Relative Write Latency: 0 00:32:39.507 Idle Power: Not Reported 00:32:39.507 Active Power: Not Reported 00:32:39.507 Non-Operational Permissive Mode: Not Supported 
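The identify dumps in this section render derived values inline: temperatures as "323 Kelvin (50 Celsius)" against a "343 Kelvin (70 Celsius)" threshold, and namespace sizes as "1048576 (4GiB)" or "1310720 (5GiB)" LBAs at the 4096-byte current LBA format. A minimal bash sketch of those two conversions, in the same shell the surrounding test scripts use; the helper names are illustrative only and are not part of the SPDK tooling:

    # Kelvin -> Celsius, as in "Current Temperature: 323 Kelvin (50 Celsius)"
    kelvin_to_celsius() { echo $(( $1 - 273 )); }
    # LBA count -> GiB at a 4096-byte data size (LBA Format #04),
    # as in "Size (in LBAs): 1048576 (4GiB)"
    lbas_to_gib() { echo $(( $1 * 4096 / 1024 / 1024 / 1024 )); }
    kelvin_to_celsius 323   # -> 50
    kelvin_to_celsius 343   # -> 70
    lbas_to_gib 1048576     # -> 4
    lbas_to_gib 1310720     # -> 5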
00:32:39.507 00:32:39.507 Health Information 00:32:39.507 ================== 00:32:39.507 Critical Warnings: 00:32:39.507 Available Spare Space: OK 00:32:39.507 Temperature: OK 00:32:39.507 Device Reliability: OK 00:32:39.507 Read Only: No 00:32:39.507 Volatile Memory Backup: OK 00:32:39.507 Current Temperature: 323 Kelvin (50 Celsius) 00:32:39.507 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:39.507 Available Spare: 0% 00:32:39.507 Available Spare Threshold: 0% 00:32:39.507 Life Percentage Used: 0% 00:32:39.507 Data Units Read: 2110 00:32:39.507 Data Units Written: 1897 00:32:39.507 Host Read Commands: 94605 00:32:39.507 Host Write Commands: 92875 00:32:39.507 Controller Busy Time: 0 minutes 00:32:39.507 Power Cycles: 0 00:32:39.507 Power On Hours: 0 hours 00:32:39.507 Unsafe Shutdowns: 0 00:32:39.507 Unrecoverable Media Errors: 0 00:32:39.507 Lifetime Error Log Entries: 0 00:32:39.507 Warning Temperature Time: 0 minutes 00:32:39.507 Critical Temperature Time: 0 minutes 00:32:39.507 00:32:39.507 Number of Queues 00:32:39.507 ================ 00:32:39.507 Number of I/O Submission Queues: 64 00:32:39.507 Number of I/O Completion Queues: 64 00:32:39.507 00:32:39.507 ZNS Specific Controller Data 00:32:39.507 ============================ 00:32:39.507 Zone Append Size Limit: 0 00:32:39.507 00:32:39.507 00:32:39.507 Active Namespaces 00:32:39.507 ================= 00:32:39.507 Namespace ID:1 00:32:39.507 Error Recovery Timeout: Unlimited 00:32:39.507 Command Set Identifier: NVM (00h) 00:32:39.507 Deallocate: Supported 00:32:39.507 Deallocated/Unwritten Error: Supported 00:32:39.507 Deallocated Read Value: All 0x00 00:32:39.507 Deallocate in Write Zeroes: Not Supported 00:32:39.507 Deallocated Guard Field: 0xFFFF 00:32:39.507 Flush: Supported 00:32:39.507 Reservation: Not Supported 00:32:39.507 Namespace Sharing Capabilities: Private 00:32:39.507 Size (in LBAs): 1048576 (4GiB) 00:32:39.507 Capacity (in LBAs): 1048576 (4GiB) 00:32:39.507 Utilization (in LBAs): 1048576 (4GiB) 00:32:39.507 Thin Provisioning: Not Supported 00:32:39.507 Per-NS Atomic Units: No 00:32:39.507 Maximum Single Source Range Length: 128 00:32:39.507 Maximum Copy Length: 128 00:32:39.507 Maximum Source Range Count: 128 00:32:39.507 NGUID/EUI64 Never Reused: No 00:32:39.507 Namespace Write Protected: No 00:32:39.507 Number of LBA Formats: 8 00:32:39.507 Current LBA Format: LBA Format #04 00:32:39.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.507 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:39.507 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:39.507 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:39.507 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:39.507 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:39.507 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:39.507 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:39.507 00:32:39.508 NVM Specific Namespace Data 00:32:39.508 =========================== 00:32:39.508 Logical Block Storage Tag Mask: 0 00:32:39.508 Protection Information Capabilities: 00:32:39.508 16b Guard Protection Information Storage Tag Support: No 00:32:39.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:39.508 Storage Tag Check Read Support: No 00:32:39.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Namespace ID:2 00:32:39.508 Error Recovery Timeout: Unlimited 00:32:39.508 Command Set Identifier: NVM (00h) 00:32:39.508 Deallocate: Supported 00:32:39.508 Deallocated/Unwritten Error: Supported 00:32:39.508 Deallocated Read Value: All 0x00 00:32:39.508 Deallocate in Write Zeroes: Not Supported 00:32:39.508 Deallocated Guard Field: 0xFFFF 00:32:39.508 Flush: Supported 00:32:39.508 Reservation: Not Supported 00:32:39.508 Namespace Sharing Capabilities: Private 00:32:39.508 Size (in LBAs): 1048576 (4GiB) 00:32:39.508 Capacity (in LBAs): 1048576 (4GiB) 00:32:39.508 Utilization (in LBAs): 1048576 (4GiB) 00:32:39.508 Thin Provisioning: Not Supported 00:32:39.508 Per-NS Atomic Units: No 00:32:39.508 Maximum Single Source Range Length: 128 00:32:39.508 Maximum Copy Length: 128 00:32:39.508 Maximum Source Range Count: 128 00:32:39.508 NGUID/EUI64 Never Reused: No 00:32:39.508 Namespace Write Protected: No 00:32:39.508 Number of LBA Formats: 8 00:32:39.508 Current LBA Format: LBA Format #04 00:32:39.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:39.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:39.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:39.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:39.508 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:39.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:39.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:39.508 00:32:39.508 NVM Specific Namespace Data 00:32:39.508 =========================== 00:32:39.508 Logical Block Storage Tag Mask: 0 00:32:39.508 Protection Information Capabilities: 00:32:39.508 16b Guard Protection Information Storage Tag Support: No 00:32:39.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:39.508 Storage Tag Check Read Support: No 00:32:39.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Namespace ID:3 00:32:39.508 Error Recovery Timeout: Unlimited 00:32:39.508 Command Set Identifier: NVM (00h) 00:32:39.508 Deallocate: Supported 00:32:39.508 Deallocated/Unwritten Error: Supported 00:32:39.508 Deallocated Read 
Value: All 0x00 00:32:39.508 Deallocate in Write Zeroes: Not Supported 00:32:39.508 Deallocated Guard Field: 0xFFFF 00:32:39.508 Flush: Supported 00:32:39.508 Reservation: Not Supported 00:32:39.508 Namespace Sharing Capabilities: Private 00:32:39.508 Size (in LBAs): 1048576 (4GiB) 00:32:39.508 Capacity (in LBAs): 1048576 (4GiB) 00:32:39.508 Utilization (in LBAs): 1048576 (4GiB) 00:32:39.508 Thin Provisioning: Not Supported 00:32:39.508 Per-NS Atomic Units: No 00:32:39.508 Maximum Single Source Range Length: 128 00:32:39.508 Maximum Copy Length: 128 00:32:39.508 Maximum Source Range Count: 128 00:32:39.508 NGUID/EUI64 Never Reused: No 00:32:39.508 Namespace Write Protected: No 00:32:39.508 Number of LBA Formats: 8 00:32:39.508 Current LBA Format: LBA Format #04 00:32:39.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:39.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:39.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:39.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:39.508 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:39.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:39.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:39.508 00:32:39.508 NVM Specific Namespace Data 00:32:39.508 =========================== 00:32:39.508 Logical Block Storage Tag Mask: 0 00:32:39.508 Protection Information Capabilities: 00:32:39.508 16b Guard Protection Information Storage Tag Support: No 00:32:39.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:39.508 Storage Tag Check Read Support: No 00:32:39.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.508 13:53:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:39.508 13:53:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:32:39.768 ===================================================== 00:32:39.768 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:39.768 ===================================================== 00:32:39.768 Controller Capabilities/Features 00:32:39.768 ================================ 00:32:39.768 Vendor ID: 1b36 00:32:39.768 Subsystem Vendor ID: 1af4 00:32:39.768 Serial Number: 12343 00:32:39.768 Model Number: QEMU NVMe Ctrl 00:32:39.768 Firmware Version: 8.0.0 00:32:39.768 Recommended Arb Burst: 6 00:32:39.768 IEEE OUI Identifier: 00 54 52 00:32:39.768 Multi-path I/O 00:32:39.768 May have multiple subsystem ports: No 00:32:39.768 May have multiple controllers: Yes 00:32:39.768 Associated with SR-IOV VF: No 00:32:39.768 Max Data Transfer Size: 524288 00:32:39.768 Max Number of Namespaces: 
256 00:32:39.769 Max Number of I/O Queues: 64 00:32:39.769 NVMe Specification Version (VS): 1.4 00:32:39.769 NVMe Specification Version (Identify): 1.4 00:32:39.769 Maximum Queue Entries: 2048 00:32:39.769 Contiguous Queues Required: Yes 00:32:39.769 Arbitration Mechanisms Supported 00:32:39.769 Weighted Round Robin: Not Supported 00:32:39.769 Vendor Specific: Not Supported 00:32:39.769 Reset Timeout: 7500 ms 00:32:39.769 Doorbell Stride: 4 bytes 00:32:39.769 NVM Subsystem Reset: Not Supported 00:32:39.769 Command Sets Supported 00:32:39.769 NVM Command Set: Supported 00:32:39.769 Boot Partition: Not Supported 00:32:39.769 Memory Page Size Minimum: 4096 bytes 00:32:39.769 Memory Page Size Maximum: 65536 bytes 00:32:39.769 Persistent Memory Region: Not Supported 00:32:39.769 Optional Asynchronous Events Supported 00:32:39.769 Namespace Attribute Notices: Supported 00:32:39.769 Firmware Activation Notices: Not Supported 00:32:39.769 ANA Change Notices: Not Supported 00:32:39.769 PLE Aggregate Log Change Notices: Not Supported 00:32:39.769 LBA Status Info Alert Notices: Not Supported 00:32:39.769 EGE Aggregate Log Change Notices: Not Supported 00:32:39.769 Normal NVM Subsystem Shutdown event: Not Supported 00:32:39.769 Zone Descriptor Change Notices: Not Supported 00:32:39.769 Discovery Log Change Notices: Not Supported 00:32:39.769 Controller Attributes 00:32:39.769 128-bit Host Identifier: Not Supported 00:32:39.769 Non-Operational Permissive Mode: Not Supported 00:32:39.769 NVM Sets: Not Supported 00:32:39.769 Read Recovery Levels: Not Supported 00:32:39.769 Endurance Groups: Supported 00:32:39.769 Predictable Latency Mode: Not Supported 00:32:39.769 Traffic Based Keep Alive: Not Supported 00:32:39.769 Namespace Granularity: Not Supported 00:32:39.769 SQ Associations: Not Supported 00:32:39.769 UUID List: Not Supported 00:32:39.769 Multi-Domain Subsystem: Not Supported 00:32:39.769 Fixed Capacity Management: Not Supported 00:32:39.769 Variable Capacity Management: Not Supported 00:32:39.769 Delete Endurance Group: Not Supported 00:32:39.769 Delete NVM Set: Not Supported 00:32:39.769 Extended LBA Formats Supported: Supported 00:32:39.769 Flexible Data Placement Supported: Supported 00:32:39.769 00:32:39.769 Controller Memory Buffer Support 00:32:39.769 ================================ 00:32:39.769 Supported: No 00:32:39.769 00:32:39.769 Persistent Memory Region Support 00:32:39.769 ================================ 00:32:39.769 Supported: No 00:32:39.769 00:32:39.769 Admin Command Set Attributes 00:32:39.769 ============================ 00:32:39.769 Security Send/Receive: Not Supported 00:32:39.769 Format NVM: Supported 00:32:39.769 Firmware Activate/Download: Not Supported 00:32:39.769 Namespace Management: Supported 00:32:39.769 Device Self-Test: Not Supported 00:32:39.769 Directives: Supported 00:32:39.769 NVMe-MI: Not Supported 00:32:39.769 Virtualization Management: Not Supported 00:32:39.769 Doorbell Buffer Config: Supported 00:32:39.769 Get LBA Status Capability: Not Supported 00:32:39.769 Command & Feature Lockdown Capability: Not Supported 00:32:39.769 Abort Command Limit: 4 00:32:39.769 Async Event Request Limit: 4 00:32:39.769 Number of Firmware Slots: N/A 00:32:39.769 Firmware Slot 1 Read-Only: N/A 00:32:39.769 Firmware Activation Without Reset: N/A 00:32:39.769 Multiple Update Detection Support: N/A 00:32:39.769 Firmware Update Granularity: No Information Provided 00:32:39.769 Per-Namespace SMART Log: Yes 00:32:39.769 Asymmetric Namespace Access Log Page: Not Supported 
00:32:39.769 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:32:39.769 Command Effects Log Page: Supported 00:32:39.769 Get Log Page Extended Data: Supported 00:32:39.769 Telemetry Log Pages: Not Supported 00:32:39.769 Persistent Event Log Pages: Not Supported 00:32:39.769 Supported Log Pages Log Page: May Support 00:32:39.769 Commands Supported & Effects Log Page: Not Supported 00:32:39.769 Feature Identifiers & Effects Log Page: May Support 00:32:39.769 NVMe-MI Commands & Effects Log Page: May Support 00:32:39.769 Data Area 4 for Telemetry Log: Not Supported 00:32:39.769 Error Log Page Entries Supported: 1 00:32:39.769 Keep Alive: Not Supported 00:32:39.769 00:32:39.769 NVM Command Set Attributes 00:32:39.769 ========================== 00:32:39.769 Submission Queue Entry Size 00:32:39.769 Max: 64 00:32:39.769 Min: 64 00:32:39.769 Completion Queue Entry Size 00:32:39.769 Max: 16 00:32:39.769 Min: 16 00:32:39.769 Number of Namespaces: 256 00:32:39.769 Compare Command: Supported 00:32:39.769 Write Uncorrectable Command: Not Supported 00:32:39.769 Dataset Management Command: Supported 00:32:39.769 Write Zeroes Command: Supported 00:32:39.769 Set Features Save Field: Supported 00:32:39.769 Reservations: Not Supported 00:32:39.769 Timestamp: Supported 00:32:39.769 Copy: Supported 00:32:39.769 Volatile Write Cache: Present 00:32:39.769 Atomic Write Unit (Normal): 1 00:32:39.769 Atomic Write Unit (PFail): 1 00:32:39.769 Atomic Compare & Write Unit: 1 00:32:39.769 Fused Compare & Write: Not Supported 00:32:39.769 Scatter-Gather List 00:32:39.769 SGL Command Set: Supported 00:32:39.769 SGL Keyed: Not Supported 00:32:39.769 SGL Bit Bucket Descriptor: Not Supported 00:32:39.769 SGL Metadata Pointer: Not Supported 00:32:39.769 Oversized SGL: Not Supported 00:32:39.769 SGL Metadata Address: Not Supported 00:32:39.769 SGL Offset: Not Supported 00:32:39.769 Transport SGL Data Block: Not Supported 00:32:39.769 Replay Protected Memory Block: Not Supported 00:32:39.769 00:32:39.769 Firmware Slot Information 00:32:39.769 ========================= 00:32:39.769 Active slot: 1 00:32:39.769 Slot 1 Firmware Revision: 1.0 00:32:39.769 00:32:39.769 00:32:39.769 Commands Supported and Effects 00:32:39.769 ============================== 00:32:39.769 Admin Commands 00:32:39.769 -------------- 00:32:39.769 Delete I/O Submission Queue (00h): Supported 00:32:39.769 Create I/O Submission Queue (01h): Supported 00:32:39.769 Get Log Page (02h): Supported 00:32:39.769 Delete I/O Completion Queue (04h): Supported 00:32:39.769 Create I/O Completion Queue (05h): Supported 00:32:39.769 Identify (06h): Supported 00:32:39.769 Abort (08h): Supported 00:32:39.769 Set Features (09h): Supported 00:32:39.769 Get Features (0Ah): Supported 00:32:39.769 Asynchronous Event Request (0Ch): Supported 00:32:39.769 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:39.769 Directive Send (19h): Supported 00:32:39.769 Directive Receive (1Ah): Supported 00:32:39.769 Virtualization Management (1Ch): Supported 00:32:39.769 Doorbell Buffer Config (7Ch): Supported 00:32:39.769 Format NVM (80h): Supported LBA-Change 00:32:39.769 I/O Commands 00:32:39.769 ------------ 00:32:39.769 Flush (00h): Supported LBA-Change 00:32:39.769 Write (01h): Supported LBA-Change 00:32:39.769 Read (02h): Supported 00:32:39.769 Compare (05h): Supported 00:32:39.769 Write Zeroes (08h): Supported LBA-Change 00:32:39.769 Dataset Management (09h): Supported LBA-Change 00:32:39.769 Unknown (0Ch): Supported 00:32:39.769 Unknown (12h): Supported 00:32:39.769 Copy 
(19h): Supported LBA-Change 00:32:39.769 Unknown (1Dh): Supported LBA-Change 00:32:39.769 00:32:39.769 Error Log 00:32:39.769 ========= 00:32:39.769 00:32:39.769 Arbitration 00:32:39.769 =========== 00:32:39.769 Arbitration Burst: no limit 00:32:39.769 00:32:39.769 Power Management 00:32:39.769 ================ 00:32:39.769 Number of Power States: 1 00:32:39.769 Current Power State: Power State #0 00:32:39.769 Power State #0: 00:32:39.769 Max Power: 25.00 W 00:32:39.769 Non-Operational State: Operational 00:32:39.769 Entry Latency: 16 microseconds 00:32:39.769 Exit Latency: 4 microseconds 00:32:39.769 Relative Read Throughput: 0 00:32:39.769 Relative Read Latency: 0 00:32:39.769 Relative Write Throughput: 0 00:32:39.769 Relative Write Latency: 0 00:32:39.769 Idle Power: Not Reported 00:32:39.769 Active Power: Not Reported 00:32:39.769 Non-Operational Permissive Mode: Not Supported 00:32:39.769 00:32:39.769 Health Information 00:32:39.769 ================== 00:32:39.769 Critical Warnings: 00:32:39.769 Available Spare Space: OK 00:32:39.769 Temperature: OK 00:32:39.769 Device Reliability: OK 00:32:39.769 Read Only: No 00:32:39.769 Volatile Memory Backup: OK 00:32:39.769 Current Temperature: 323 Kelvin (50 Celsius) 00:32:39.769 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:39.769 Available Spare: 0% 00:32:39.769 Available Spare Threshold: 0% 00:32:39.769 Life Percentage Used: 0% 00:32:39.770 Data Units Read: 747 00:32:39.770 Data Units Written: 676 00:32:39.770 Host Read Commands: 31891 00:32:39.770 Host Write Commands: 31314 00:32:39.770 Controller Busy Time: 0 minutes 00:32:39.770 Power Cycles: 0 00:32:39.770 Power On Hours: 0 hours 00:32:39.770 Unsafe Shutdowns: 0 00:32:39.770 Unrecoverable Media Errors: 0 00:32:39.770 Lifetime Error Log Entries: 0 00:32:39.770 Warning Temperature Time: 0 minutes 00:32:39.770 Critical Temperature Time: 0 minutes 00:32:39.770 00:32:39.770 Number of Queues 00:32:39.770 ================ 00:32:39.770 Number of I/O Submission Queues: 64 00:32:39.770 Number of I/O Completion Queues: 64 00:32:39.770 00:32:39.770 ZNS Specific Controller Data 00:32:39.770 ============================ 00:32:39.770 Zone Append Size Limit: 0 00:32:39.770 00:32:39.770 00:32:39.770 Active Namespaces 00:32:39.770 ================= 00:32:39.770 Namespace ID:1 00:32:39.770 Error Recovery Timeout: Unlimited 00:32:39.770 Command Set Identifier: NVM (00h) 00:32:39.770 Deallocate: Supported 00:32:39.770 Deallocated/Unwritten Error: Supported 00:32:39.770 Deallocated Read Value: All 0x00 00:32:39.770 Deallocate in Write Zeroes: Not Supported 00:32:39.770 Deallocated Guard Field: 0xFFFF 00:32:39.770 Flush: Supported 00:32:39.770 Reservation: Not Supported 00:32:39.770 Namespace Sharing Capabilities: Multiple Controllers 00:32:39.770 Size (in LBAs): 262144 (1GiB) 00:32:39.770 Capacity (in LBAs): 262144 (1GiB) 00:32:39.770 Utilization (in LBAs): 262144 (1GiB) 00:32:39.770 Thin Provisioning: Not Supported 00:32:39.770 Per-NS Atomic Units: No 00:32:39.770 Maximum Single Source Range Length: 128 00:32:39.770 Maximum Copy Length: 128 00:32:39.770 Maximum Source Range Count: 128 00:32:39.770 NGUID/EUI64 Never Reused: No 00:32:39.770 Namespace Write Protected: No 00:32:39.770 Endurance group ID: 1 00:32:39.770 Number of LBA Formats: 8 00:32:39.770 Current LBA Format: LBA Format #04 00:32:39.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:39.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:39.770 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:32:39.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:39.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:39.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:39.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:39.770 00:32:39.770 Get Feature FDP: 00:32:39.770 ================ 00:32:39.770 Enabled: Yes 00:32:39.770 FDP configuration index: 0 00:32:39.770 00:32:39.770 FDP configurations log page 00:32:39.770 =========================== 00:32:39.770 Number of FDP configurations: 1 00:32:39.770 Version: 0 00:32:39.770 Size: 112 00:32:39.770 FDP Configuration Descriptor: 0 00:32:39.770 Descriptor Size: 96 00:32:39.770 Reclaim Group Identifier format: 2 00:32:39.770 FDP Volatile Write Cache: Not Present 00:32:39.770 FDP Configuration: Valid 00:32:39.770 Vendor Specific Size: 0 00:32:39.770 Number of Reclaim Groups: 2 00:32:39.770 Number of Reclaim Unit Handles: 8 00:32:39.770 Max Placement Identifiers: 128 00:32:39.770 Number of Namespaces Supported: 256 00:32:39.770 Reclaim Unit Nominal Size: 6000000 bytes 00:32:39.770 Estimated Reclaim Unit Time Limit: Not Reported 00:32:39.770 RUH Desc #000: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #001: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #002: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #003: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #004: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #005: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #006: RUH Type: Initially Isolated 00:32:39.770 RUH Desc #007: RUH Type: Initially Isolated 00:32:39.770 00:32:39.770 FDP reclaim unit handle usage log page 00:32:39.770 ====================================== 00:32:39.770 Number of Reclaim Unit Handles: 8 00:32:39.770 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:39.770 RUH Usage Desc #001: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #002: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #003: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #004: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #005: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #006: RUH Attributes: Unused 00:32:39.770 RUH Usage Desc #007: RUH Attributes: Unused 00:32:39.770 00:32:39.770 FDP statistics log page 00:32:39.770 ======================= 00:32:39.770 Host bytes with metadata written: 428318720 00:32:39.770 Media bytes with metadata written: 428363776 00:32:39.770 Media bytes erased: 0 00:32:39.770 00:32:39.770 FDP events log page 00:32:39.770 =================== 00:32:39.770 Number of FDP events: 0 00:32:39.770 00:32:39.770 NVM Specific Namespace Data 00:32:39.770 =========================== 00:32:39.770 Logical Block Storage Tag Mask: 0 00:32:39.770 Protection Information Capabilities: 00:32:39.770 16b Guard Protection Information Storage Tag Support: No 00:32:39.770 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:32:39.770 Storage Tag Check Read Support: No 00:32:39.770 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:32:39.770 00:32:39.770 real 0m2.003s 00:32:39.770 user 0m0.755s 00:32:39.770 sys 0m1.013s 00:32:39.770 13:53:37 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.770 13:53:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:32:39.770 ************************************ 00:32:39.770 END TEST nvme_identify 00:32:39.770 ************************************ 00:32:40.029 13:53:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:32:40.029 13:53:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:40.029 13:53:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.029 13:53:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:40.029 ************************************ 00:32:40.029 START TEST nvme_perf 00:32:40.029 ************************************ 00:32:40.029 13:53:37 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:32:40.029 13:53:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:32:41.410 Initializing NVMe Controllers 00:32:41.410 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:41.410 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:41.410 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:41.410 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:41.410 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:41.410 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:32:41.410 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:32:41.410 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:32:41.410 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:32:41.410 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:32:41.410 Initialization complete. Launching workers. 
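(The identify pass that just completed can be reproduced by hand before reading the latency results below. A minimal sketch, assuming the SPDK tree is built at the path used in this run and the target controller is still bound at BDF 0000:00:13.0; the setup.sh step and the grep filter are illustrative additions, not part of this run:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./scripts/setup.sh    # rebind NVMe controllers for userspace (vfio-pci/uio) access
  # same transport, traddr, and shared-memory-id flags as the run above:
  sudo ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
  # quick check that Flexible Data Placement is enabled before FDP-dependent tests:
  sudo ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 | grep -A 3 'Get Feature FDP'
)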
00:32:41.410 ======================================================== 00:32:41.410 Latency(us) 00:32:41.411 Device Information : IOPS MiB/s Average min max 00:32:41.411 PCIE (0000:00:10.0) NSID 1 from core 0: 12694.55 148.76 10105.30 8267.66 45807.22 00:32:41.411 PCIE (0000:00:11.0) NSID 1 from core 0: 12694.55 148.76 10084.26 8365.95 43336.81 00:32:41.411 PCIE (0000:00:13.0) NSID 1 from core 0: 12694.55 148.76 10061.78 8332.58 41196.63 00:32:41.411 PCIE (0000:00:12.0) NSID 1 from core 0: 12694.55 148.76 10040.08 8330.24 38666.55 00:32:41.411 PCIE (0000:00:12.0) NSID 2 from core 0: 12694.55 148.76 10019.91 8326.60 36125.63 00:32:41.411 PCIE (0000:00:12.0) NSID 3 from core 0: 12758.34 149.51 9949.04 8329.62 29020.81 00:32:41.411 ======================================================== 00:32:41.411 Total : 76231.11 893.33 10043.32 8267.66 45807.22 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8488.472us 00:32:41.411 10.00000% : 8738.133us 00:32:41.411 25.00000% : 8987.794us 00:32:41.411 50.00000% : 9487.116us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11172.328us 00:32:41.411 95.00000% : 11796.480us 00:32:41.411 98.00000% : 15541.394us 00:32:41.411 99.00000% : 37698.804us 00:32:41.411 99.50000% : 43690.667us 00:32:41.411 99.90000% : 45438.293us 00:32:41.411 99.99000% : 45937.615us 00:32:41.411 99.99900% : 45937.615us 00:32:41.411 99.99990% : 45937.615us 00:32:41.411 99.99999% : 45937.615us 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8550.888us 00:32:41.411 10.00000% : 8800.549us 00:32:41.411 25.00000% : 9050.210us 00:32:41.411 50.00000% : 9424.701us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11234.743us 00:32:41.411 95.00000% : 11734.065us 00:32:41.411 98.00000% : 15791.055us 00:32:41.411 99.00000% : 34952.533us 00:32:41.411 99.50000% : 41194.057us 00:32:41.411 99.90000% : 42941.684us 00:32:41.411 99.99000% : 43441.006us 00:32:41.411 99.99900% : 43441.006us 00:32:41.411 99.99990% : 43441.006us 00:32:41.411 99.99999% : 43441.006us 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8550.888us 00:32:41.411 10.00000% : 8800.549us 00:32:41.411 25.00000% : 9050.210us 00:32:41.411 50.00000% : 9424.701us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11172.328us 00:32:41.411 95.00000% : 11796.480us 00:32:41.411 98.00000% : 14168.259us 00:32:41.411 99.00000% : 32955.246us 00:32:41.411 99.50000% : 39196.770us 00:32:41.411 99.90000% : 40944.396us 00:32:41.411 99.99000% : 41194.057us 00:32:41.411 99.99900% : 41443.718us 00:32:41.411 99.99990% : 41443.718us 00:32:41.411 99.99999% : 41443.718us 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8550.888us 00:32:41.411 10.00000% : 8800.549us 00:32:41.411 25.00000% : 9050.210us 00:32:41.411 50.00000% : 9424.701us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11172.328us 00:32:41.411 95.00000% : 11796.480us 00:32:41.411 98.00000% : 14043.429us 
00:32:41.411 99.00000% : 30458.636us 00:32:41.411 99.50000% : 36700.160us 00:32:41.411 99.90000% : 38447.787us 00:32:41.411 99.99000% : 38697.448us 00:32:41.411 99.99900% : 38697.448us 00:32:41.411 99.99990% : 38697.448us 00:32:41.411 99.99999% : 38697.448us 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8550.888us 00:32:41.411 10.00000% : 8800.549us 00:32:41.411 25.00000% : 9050.210us 00:32:41.411 50.00000% : 9424.701us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11172.328us 00:32:41.411 95.00000% : 11796.480us 00:32:41.411 98.00000% : 14792.411us 00:32:41.411 99.00000% : 27962.027us 00:32:41.411 99.50000% : 34203.550us 00:32:41.411 99.90000% : 35951.177us 00:32:41.411 99.99000% : 36200.838us 00:32:41.411 99.99900% : 36200.838us 00:32:41.411 99.99990% : 36200.838us 00:32:41.411 99.99999% : 36200.838us 00:32:41.411 00:32:41.411 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:32:41.411 ================================================================================= 00:32:41.411 1.00000% : 8550.888us 00:32:41.411 10.00000% : 8800.549us 00:32:41.411 25.00000% : 9050.210us 00:32:41.411 50.00000% : 9424.701us 00:32:41.411 75.00000% : 10548.175us 00:32:41.411 90.00000% : 11234.743us 00:32:41.411 95.00000% : 11796.480us 00:32:41.411 98.00000% : 13544.107us 00:32:41.411 99.00000% : 18599.741us 00:32:41.411 99.50000% : 23093.638us 00:32:41.411 99.90000% : 28711.010us 00:32:41.411 99.99000% : 29085.501us 00:32:41.411 99.99900% : 29085.501us 00:32:41.411 99.99990% : 29085.501us 00:32:41.411 99.99999% : 29085.501us 00:32:41.411 00:32:41.411 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:41.411 ============================================================================== 00:32:41.411 Range in us Cumulative IO count 00:32:41.411 8238.811 - 8301.227: 0.0785% ( 10) 00:32:41.411 8301.227 - 8363.642: 0.2827% ( 26) 00:32:41.411 8363.642 - 8426.057: 0.7145% ( 55) 00:32:41.411 8426.057 - 8488.472: 1.8059% ( 139) 00:32:41.411 8488.472 - 8550.888: 3.5097% ( 217) 00:32:41.411 8550.888 - 8613.303: 5.8967% ( 304) 00:32:41.411 8613.303 - 8675.718: 8.7861% ( 368) 00:32:41.411 8675.718 - 8738.133: 11.9033% ( 397) 00:32:41.411 8738.133 - 8800.549: 15.1696% ( 416) 00:32:41.411 8800.549 - 8862.964: 18.6558% ( 444) 00:32:41.411 8862.964 - 8925.379: 21.9614% ( 421) 00:32:41.411 8925.379 - 8987.794: 25.4476% ( 444) 00:32:41.411 8987.794 - 9050.210: 28.8003% ( 427) 00:32:41.411 9050.210 - 9112.625: 32.2158% ( 435) 00:32:41.411 9112.625 - 9175.040: 35.6784% ( 441) 00:32:41.411 9175.040 - 9237.455: 39.1175% ( 438) 00:32:41.411 9237.455 - 9299.870: 42.5565% ( 438) 00:32:41.411 9299.870 - 9362.286: 45.9720% ( 435) 00:32:41.411 9362.286 - 9424.701: 49.3483% ( 430) 00:32:41.411 9424.701 - 9487.116: 52.3634% ( 384) 00:32:41.411 9487.116 - 9549.531: 55.2293% ( 365) 00:32:41.411 9549.531 - 9611.947: 57.5377% ( 294) 00:32:41.411 9611.947 - 9674.362: 59.3750% ( 234) 00:32:41.411 9674.362 - 9736.777: 60.6391% ( 161) 00:32:41.411 9736.777 - 9799.192: 61.8169% ( 150) 00:32:41.411 9799.192 - 9861.608: 62.7198% ( 115) 00:32:41.411 9861.608 - 9924.023: 63.6542% ( 119) 00:32:41.411 9924.023 - 9986.438: 64.5964% ( 120) 00:32:41.411 9986.438 - 10048.853: 65.6329% ( 132) 00:32:41.411 10048.853 - 10111.269: 66.8420% ( 154) 00:32:41.411 10111.269 - 10173.684: 67.9727% ( 144) 00:32:41.411 10173.684 - 10236.099: 69.1583% ( 151) 
00:32:41.411 10236.099 - 10298.514: 70.3832% ( 156) 00:32:41.411 10298.514 - 10360.930: 71.7965% ( 180) 00:32:41.411 10360.930 - 10423.345: 73.2491% ( 185) 00:32:41.411 10423.345 - 10485.760: 74.5839% ( 170) 00:32:41.411 10485.760 - 10548.175: 76.0207% ( 183) 00:32:41.411 10548.175 - 10610.590: 77.4497% ( 182) 00:32:41.411 10610.590 - 10673.006: 78.9337% ( 189) 00:32:41.411 10673.006 - 10735.421: 80.3785% ( 184) 00:32:41.411 10735.421 - 10797.836: 81.9959% ( 206) 00:32:41.411 10797.836 - 10860.251: 83.6840% ( 215) 00:32:41.411 10860.251 - 10922.667: 85.2622% ( 201) 00:32:41.411 10922.667 - 10985.082: 86.7070% ( 184) 00:32:41.411 10985.082 - 11047.497: 87.9947% ( 164) 00:32:41.411 11047.497 - 11109.912: 89.0468% ( 134) 00:32:41.411 11109.912 - 11172.328: 90.0126% ( 123) 00:32:41.411 11172.328 - 11234.743: 90.8134% ( 102) 00:32:41.411 11234.743 - 11297.158: 91.5122% ( 89) 00:32:41.411 11297.158 - 11359.573: 92.1718% ( 84) 00:32:41.411 11359.573 - 11421.989: 92.7057% ( 68) 00:32:41.411 11421.989 - 11484.404: 93.1925% ( 62) 00:32:41.411 11484.404 - 11546.819: 93.6322% ( 56) 00:32:41.411 11546.819 - 11609.234: 94.0641% ( 55) 00:32:41.411 11609.234 - 11671.650: 94.5116% ( 57) 00:32:41.411 11671.650 - 11734.065: 94.9121% ( 51) 00:32:41.411 11734.065 - 11796.480: 95.2732% ( 46) 00:32:41.411 11796.480 - 11858.895: 95.5873% ( 40) 00:32:41.411 11858.895 - 11921.310: 95.8543% ( 34) 00:32:41.411 11921.310 - 11983.726: 96.0898% ( 30) 00:32:41.411 11983.726 - 12046.141: 96.3175% ( 29) 00:32:41.411 12046.141 - 12108.556: 96.5845% ( 34) 00:32:41.411 12108.556 - 12170.971: 96.8122% ( 29) 00:32:41.411 12170.971 - 12233.387: 96.9535% ( 18) 00:32:41.411 12233.387 - 12295.802: 97.0556% ( 13) 00:32:41.411 12295.802 - 12358.217: 97.1577% ( 13) 00:32:41.411 12358.217 - 12420.632: 97.2440% ( 11) 00:32:41.411 12420.632 - 12483.048: 97.2990% ( 7) 00:32:41.411 12483.048 - 12545.463: 97.3854% ( 11) 00:32:41.411 12545.463 - 12607.878: 97.4639% ( 10) 00:32:41.411 12607.878 - 12670.293: 97.5188% ( 7) 00:32:41.411 12670.293 - 12732.709: 97.6052% ( 11) 00:32:41.412 12732.709 - 12795.124: 97.6445% ( 5) 00:32:41.412 12795.124 - 12857.539: 97.6994% ( 7) 00:32:41.412 12857.539 - 12919.954: 97.7544% ( 7) 00:32:41.412 12919.954 - 12982.370: 97.8015% ( 6) 00:32:41.412 12982.370 - 13044.785: 97.8565% ( 7) 00:32:41.412 13044.785 - 13107.200: 97.9036% ( 6) 00:32:41.412 13107.200 - 13169.615: 97.9271% ( 3) 00:32:41.412 13169.615 - 13232.030: 97.9350% ( 1) 00:32:41.412 13232.030 - 13294.446: 97.9664% ( 4) 00:32:41.412 13294.446 - 13356.861: 97.9899% ( 3) 00:32:41.412 15478.979 - 15541.394: 98.0292% ( 5) 00:32:41.412 15541.394 - 15603.810: 98.0449% ( 2) 00:32:41.412 15603.810 - 15666.225: 98.0842% ( 5) 00:32:41.412 15666.225 - 15728.640: 98.0999% ( 2) 00:32:41.412 15728.640 - 15791.055: 98.1313% ( 4) 00:32:41.412 15791.055 - 15853.470: 98.1627% ( 4) 00:32:41.412 15853.470 - 15915.886: 98.1941% ( 4) 00:32:41.412 15915.886 - 15978.301: 98.2255% ( 4) 00:32:41.412 15978.301 - 16103.131: 98.2648% ( 5) 00:32:41.412 16103.131 - 16227.962: 98.3276% ( 8) 00:32:41.412 16227.962 - 16352.792: 98.3825% ( 7) 00:32:41.412 16352.792 - 16477.623: 98.4375% ( 7) 00:32:41.412 16477.623 - 16602.453: 98.5239% ( 11) 00:32:41.412 16602.453 - 16727.284: 98.5788% ( 7) 00:32:41.412 16727.284 - 16852.114: 98.6259% ( 6) 00:32:41.412 16852.114 - 16976.945: 98.6731% ( 6) 00:32:41.412 16976.945 - 17101.775: 98.7280% ( 7) 00:32:41.412 17101.775 - 17226.606: 98.7751% ( 6) 00:32:41.412 17226.606 - 17351.436: 98.8222% ( 6) 00:32:41.412 17351.436 - 17476.267: 98.8772% ( 7) 
00:32:41.412 17476.267 - 17601.097: 98.9322% ( 7) 00:32:41.412 17601.097 - 17725.928: 98.9793% ( 6) 00:32:41.412 17725.928 - 17850.758: 98.9950% ( 2) 00:32:41.412 37449.143 - 37698.804: 99.0421% ( 6) 00:32:41.412 37698.804 - 37948.465: 99.0892% ( 6) 00:32:41.412 37948.465 - 38198.126: 99.1442% ( 7) 00:32:41.412 38198.126 - 38447.787: 99.1913% ( 6) 00:32:41.412 38447.787 - 38697.448: 99.2462% ( 7) 00:32:41.412 38697.448 - 38947.109: 99.3169% ( 9) 00:32:41.412 38947.109 - 39196.770: 99.3719% ( 7) 00:32:41.412 39196.770 - 39446.430: 99.4268% ( 7) 00:32:41.412 39446.430 - 39696.091: 99.4818% ( 7) 00:32:41.412 39696.091 - 39945.752: 99.4975% ( 2) 00:32:41.412 43441.006 - 43690.667: 99.5053% ( 1) 00:32:41.412 43690.667 - 43940.328: 99.5524% ( 6) 00:32:41.412 43940.328 - 44189.989: 99.6153% ( 8) 00:32:41.412 44189.989 - 44439.650: 99.6781% ( 8) 00:32:41.412 44439.650 - 44689.310: 99.7330% ( 7) 00:32:41.412 44689.310 - 44938.971: 99.7880% ( 7) 00:32:41.412 44938.971 - 45188.632: 99.8508% ( 8) 00:32:41.412 45188.632 - 45438.293: 99.9136% ( 8) 00:32:41.412 45438.293 - 45687.954: 99.9843% ( 9) 00:32:41.412 45687.954 - 45937.615: 100.0000% ( 2) 00:32:41.412 00:32:41.412 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:32:41.412 ============================================================================== 00:32:41.412 Range in us Cumulative IO count 00:32:41.412 8363.642 - 8426.057: 0.2120% ( 27) 00:32:41.412 8426.057 - 8488.472: 0.5418% ( 42) 00:32:41.412 8488.472 - 8550.888: 1.4683% ( 118) 00:32:41.412 8550.888 - 8613.303: 3.0465% ( 201) 00:32:41.412 8613.303 - 8675.718: 5.5905% ( 324) 00:32:41.412 8675.718 - 8738.133: 8.5113% ( 372) 00:32:41.412 8738.133 - 8800.549: 11.8719% ( 428) 00:32:41.412 8800.549 - 8862.964: 15.8684% ( 509) 00:32:41.412 8862.964 - 8925.379: 19.8492% ( 507) 00:32:41.412 8925.379 - 8987.794: 24.0028% ( 529) 00:32:41.412 8987.794 - 9050.210: 28.1093% ( 523) 00:32:41.412 9050.210 - 9112.625: 32.1844% ( 519) 00:32:41.412 9112.625 - 9175.040: 36.3615% ( 532) 00:32:41.412 9175.040 - 9237.455: 40.4287% ( 518) 00:32:41.412 9237.455 - 9299.870: 44.3703% ( 502) 00:32:41.412 9299.870 - 9362.286: 48.3276% ( 504) 00:32:41.412 9362.286 - 9424.701: 52.0336% ( 472) 00:32:41.412 9424.701 - 9487.116: 55.1272% ( 394) 00:32:41.412 9487.116 - 9549.531: 57.5848% ( 313) 00:32:41.412 9549.531 - 9611.947: 59.4378% ( 236) 00:32:41.412 9611.947 - 9674.362: 60.7334% ( 165) 00:32:41.412 9674.362 - 9736.777: 61.5264% ( 101) 00:32:41.412 9736.777 - 9799.192: 62.0446% ( 66) 00:32:41.412 9799.192 - 9861.608: 62.4215% ( 48) 00:32:41.412 9861.608 - 9924.023: 62.8769% ( 58) 00:32:41.412 9924.023 - 9986.438: 63.3637% ( 62) 00:32:41.412 9986.438 - 10048.853: 64.2431% ( 112) 00:32:41.412 10048.853 - 10111.269: 65.3031% ( 135) 00:32:41.412 10111.269 - 10173.684: 66.5672% ( 161) 00:32:41.412 10173.684 - 10236.099: 67.9413% ( 175) 00:32:41.412 10236.099 - 10298.514: 69.2918% ( 172) 00:32:41.412 10298.514 - 10360.930: 70.6972% ( 179) 00:32:41.412 10360.930 - 10423.345: 72.2833% ( 202) 00:32:41.412 10423.345 - 10485.760: 73.9714% ( 215) 00:32:41.412 10485.760 - 10548.175: 75.6595% ( 215) 00:32:41.412 10548.175 - 10610.590: 77.4497% ( 228) 00:32:41.412 10610.590 - 10673.006: 79.1222% ( 213) 00:32:41.412 10673.006 - 10735.421: 80.8103% ( 215) 00:32:41.412 10735.421 - 10797.836: 82.5141% ( 217) 00:32:41.412 10797.836 - 10860.251: 84.0845% ( 200) 00:32:41.412 10860.251 - 10922.667: 85.6156% ( 195) 00:32:41.412 10922.667 - 10985.082: 86.8876% ( 162) 00:32:41.412 10985.082 - 11047.497: 88.0810% ( 152) 
00:32:41.412 11047.497 - 11109.912: 89.0468% ( 123) 00:32:41.412 11109.912 - 11172.328: 89.9576% ( 116) 00:32:41.412 11172.328 - 11234.743: 90.7428% ( 100) 00:32:41.412 11234.743 - 11297.158: 91.5044% ( 97) 00:32:41.412 11297.158 - 11359.573: 92.1796% ( 86) 00:32:41.412 11359.573 - 11421.989: 92.7371% ( 71) 00:32:41.412 11421.989 - 11484.404: 93.2710% ( 68) 00:32:41.412 11484.404 - 11546.819: 93.7657% ( 63) 00:32:41.412 11546.819 - 11609.234: 94.2447% ( 61) 00:32:41.412 11609.234 - 11671.650: 94.6844% ( 56) 00:32:41.412 11671.650 - 11734.065: 95.0455% ( 46) 00:32:41.412 11734.065 - 11796.480: 95.4224% ( 48) 00:32:41.412 11796.480 - 11858.895: 95.7208% ( 38) 00:32:41.412 11858.895 - 11921.310: 95.9956% ( 35) 00:32:41.412 11921.310 - 11983.726: 96.2233% ( 29) 00:32:41.412 11983.726 - 12046.141: 96.4274% ( 26) 00:32:41.412 12046.141 - 12108.556: 96.6159% ( 24) 00:32:41.412 12108.556 - 12170.971: 96.7258% ( 14) 00:32:41.412 12170.971 - 12233.387: 96.8593% ( 17) 00:32:41.412 12233.387 - 12295.802: 96.9692% ( 14) 00:32:41.412 12295.802 - 12358.217: 97.0791% ( 14) 00:32:41.412 12358.217 - 12420.632: 97.1891% ( 14) 00:32:41.412 12420.632 - 12483.048: 97.3068% ( 15) 00:32:41.412 12483.048 - 12545.463: 97.4168% ( 14) 00:32:41.412 12545.463 - 12607.878: 97.5188% ( 13) 00:32:41.412 12607.878 - 12670.293: 97.5974% ( 10) 00:32:41.412 12670.293 - 12732.709: 97.6680% ( 9) 00:32:41.412 12732.709 - 12795.124: 97.7308% ( 8) 00:32:41.412 12795.124 - 12857.539: 97.8015% ( 9) 00:32:41.412 12857.539 - 12919.954: 97.8643% ( 8) 00:32:41.412 12919.954 - 12982.370: 97.9271% ( 8) 00:32:41.412 12982.370 - 13044.785: 97.9821% ( 7) 00:32:41.412 13044.785 - 13107.200: 97.9899% ( 1) 00:32:41.412 15728.640 - 15791.055: 98.0135% ( 3) 00:32:41.412 15791.055 - 15853.470: 98.0763% ( 8) 00:32:41.412 15853.470 - 15915.886: 98.1313% ( 7) 00:32:41.412 15915.886 - 15978.301: 98.2019% ( 9) 00:32:41.412 15978.301 - 16103.131: 98.3433% ( 18) 00:32:41.412 16103.131 - 16227.962: 98.4611% ( 15) 00:32:41.412 16227.962 - 16352.792: 98.5867% ( 16) 00:32:41.412 16352.792 - 16477.623: 98.6966% ( 14) 00:32:41.412 16477.623 - 16602.453: 98.8222% ( 16) 00:32:41.412 16602.453 - 16727.284: 98.9322% ( 14) 00:32:41.412 16727.284 - 16852.114: 98.9950% ( 8) 00:32:41.412 34702.872 - 34952.533: 99.0028% ( 1) 00:32:41.412 34952.533 - 35202.194: 99.0656% ( 8) 00:32:41.412 35202.194 - 35451.855: 99.1285% ( 8) 00:32:41.412 35451.855 - 35701.516: 99.1834% ( 7) 00:32:41.412 35701.516 - 35951.177: 99.2462% ( 8) 00:32:41.412 35951.177 - 36200.838: 99.3012% ( 7) 00:32:41.412 36200.838 - 36450.499: 99.3562% ( 7) 00:32:41.412 36450.499 - 36700.160: 99.4190% ( 8) 00:32:41.412 36700.160 - 36949.821: 99.4818% ( 8) 00:32:41.412 36949.821 - 37199.482: 99.4975% ( 2) 00:32:41.412 40944.396 - 41194.057: 99.5132% ( 2) 00:32:41.412 41194.057 - 41443.718: 99.5682% ( 7) 00:32:41.412 41443.718 - 41693.379: 99.6231% ( 7) 00:32:41.412 41693.379 - 41943.040: 99.6859% ( 8) 00:32:41.412 41943.040 - 42192.701: 99.7487% ( 8) 00:32:41.412 42192.701 - 42442.362: 99.8116% ( 8) 00:32:41.412 42442.362 - 42692.023: 99.8665% ( 7) 00:32:41.412 42692.023 - 42941.684: 99.9215% ( 7) 00:32:41.412 42941.684 - 43191.345: 99.9686% ( 6) 00:32:41.412 43191.345 - 43441.006: 100.0000% ( 4) 00:32:41.412 00:32:41.412 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:32:41.412 ============================================================================== 00:32:41.412 Range in us Cumulative IO count 00:32:41.412 8301.227 - 8363.642: 0.0393% ( 5) 00:32:41.412 8363.642 - 8426.057: 0.2434% ( 
26) 00:32:41.412 8426.057 - 8488.472: 0.5810% ( 43) 00:32:41.412 8488.472 - 8550.888: 1.4918% ( 116) 00:32:41.412 8550.888 - 8613.303: 3.1329% ( 209) 00:32:41.412 8613.303 - 8675.718: 5.6219% ( 317) 00:32:41.412 8675.718 - 8738.133: 8.7783% ( 402) 00:32:41.412 8738.133 - 8800.549: 12.1781% ( 433) 00:32:41.412 8800.549 - 8862.964: 16.1040% ( 500) 00:32:41.412 8862.964 - 8925.379: 20.0220% ( 499) 00:32:41.412 8925.379 - 8987.794: 23.9243% ( 497) 00:32:41.412 8987.794 - 9050.210: 28.1171% ( 534) 00:32:41.412 9050.210 - 9112.625: 32.2786% ( 530) 00:32:41.412 9112.625 - 9175.040: 36.3379% ( 517) 00:32:41.412 9175.040 - 9237.455: 40.5622% ( 538) 00:32:41.412 9237.455 - 9299.870: 44.5273% ( 505) 00:32:41.412 9299.870 - 9362.286: 48.5553% ( 513) 00:32:41.412 9362.286 - 9424.701: 52.1985% ( 464) 00:32:41.413 9424.701 - 9487.116: 55.3863% ( 406) 00:32:41.413 9487.116 - 9549.531: 57.7811% ( 305) 00:32:41.413 9549.531 - 9611.947: 59.5085% ( 220) 00:32:41.413 9611.947 - 9674.362: 60.6862% ( 150) 00:32:41.413 9674.362 - 9736.777: 61.5264% ( 107) 00:32:41.413 9736.777 - 9799.192: 62.0524% ( 67) 00:32:41.413 9799.192 - 9861.608: 62.5079% ( 58) 00:32:41.413 9861.608 - 9924.023: 63.0418% ( 68) 00:32:41.413 9924.023 - 9986.438: 63.6778% ( 81) 00:32:41.413 9986.438 - 10048.853: 64.5258% ( 108) 00:32:41.413 10048.853 - 10111.269: 65.5308% ( 128) 00:32:41.413 10111.269 - 10173.684: 66.7478% ( 155) 00:32:41.413 10173.684 - 10236.099: 68.0512% ( 166) 00:32:41.413 10236.099 - 10298.514: 69.5038% ( 185) 00:32:41.413 10298.514 - 10360.930: 70.9642% ( 186) 00:32:41.413 10360.930 - 10423.345: 72.4874% ( 194) 00:32:41.413 10423.345 - 10485.760: 74.0656% ( 201) 00:32:41.413 10485.760 - 10548.175: 75.7538% ( 215) 00:32:41.413 10548.175 - 10610.590: 77.4497% ( 216) 00:32:41.413 10610.590 - 10673.006: 79.0986% ( 210) 00:32:41.413 10673.006 - 10735.421: 80.8260% ( 220) 00:32:41.413 10735.421 - 10797.836: 82.4984% ( 213) 00:32:41.413 10797.836 - 10860.251: 84.1630% ( 212) 00:32:41.413 10860.251 - 10922.667: 85.7491% ( 202) 00:32:41.413 10922.667 - 10985.082: 87.0839% ( 170) 00:32:41.413 10985.082 - 11047.497: 88.2067% ( 143) 00:32:41.413 11047.497 - 11109.912: 89.1253% ( 117) 00:32:41.413 11109.912 - 11172.328: 90.0204% ( 114) 00:32:41.413 11172.328 - 11234.743: 90.7663% ( 95) 00:32:41.413 11234.743 - 11297.158: 91.4494% ( 87) 00:32:41.413 11297.158 - 11359.573: 92.0776% ( 80) 00:32:41.413 11359.573 - 11421.989: 92.6508% ( 73) 00:32:41.413 11421.989 - 11484.404: 93.2082% ( 71) 00:32:41.413 11484.404 - 11546.819: 93.6558% ( 57) 00:32:41.413 11546.819 - 11609.234: 94.1033% ( 57) 00:32:41.413 11609.234 - 11671.650: 94.5038% ( 51) 00:32:41.413 11671.650 - 11734.065: 94.8807% ( 48) 00:32:41.413 11734.065 - 11796.480: 95.2026% ( 41) 00:32:41.413 11796.480 - 11858.895: 95.4695% ( 34) 00:32:41.413 11858.895 - 11921.310: 95.6972% ( 29) 00:32:41.413 11921.310 - 11983.726: 95.9171% ( 28) 00:32:41.413 11983.726 - 12046.141: 96.0898% ( 22) 00:32:41.413 12046.141 - 12108.556: 96.1919% ( 13) 00:32:41.413 12108.556 - 12170.971: 96.2861% ( 12) 00:32:41.413 12170.971 - 12233.387: 96.3725% ( 11) 00:32:41.413 12233.387 - 12295.802: 96.4903% ( 15) 00:32:41.413 12295.802 - 12358.217: 96.5845% ( 12) 00:32:41.413 12358.217 - 12420.632: 96.6944% ( 14) 00:32:41.413 12420.632 - 12483.048: 96.7808% ( 11) 00:32:41.413 12483.048 - 12545.463: 96.8671% ( 11) 00:32:41.413 12545.463 - 12607.878: 96.9535% ( 11) 00:32:41.413 12607.878 - 12670.293: 97.0634% ( 14) 00:32:41.413 12670.293 - 12732.709: 97.1734% ( 14) 00:32:41.413 12732.709 - 12795.124: 97.2597% ( 
11) 00:32:41.413 12795.124 - 12857.539: 97.3383% ( 10) 00:32:41.413 12857.539 - 12919.954: 97.4168% ( 10) 00:32:41.413 12919.954 - 12982.370: 97.5031% ( 11) 00:32:41.413 12982.370 - 13044.785: 97.5660% ( 8) 00:32:41.413 13044.785 - 13107.200: 97.6288% ( 8) 00:32:41.413 13107.200 - 13169.615: 97.6916% ( 8) 00:32:41.413 13169.615 - 13232.030: 97.7622% ( 9) 00:32:41.413 13232.030 - 13294.446: 97.8015% ( 5) 00:32:41.413 13294.446 - 13356.861: 97.8408% ( 5) 00:32:41.413 13356.861 - 13419.276: 97.8722% ( 4) 00:32:41.413 13419.276 - 13481.691: 97.9114% ( 5) 00:32:41.413 13481.691 - 13544.107: 97.9428% ( 4) 00:32:41.413 13544.107 - 13606.522: 97.9742% ( 4) 00:32:41.413 13606.522 - 13668.937: 97.9899% ( 2) 00:32:41.413 14105.844 - 14168.259: 98.0057% ( 2) 00:32:41.413 14168.259 - 14230.674: 98.0371% ( 4) 00:32:41.413 14230.674 - 14293.090: 98.0685% ( 4) 00:32:41.413 14293.090 - 14355.505: 98.0999% ( 4) 00:32:41.413 14355.505 - 14417.920: 98.1313% ( 4) 00:32:41.413 14417.920 - 14480.335: 98.1627% ( 4) 00:32:41.413 14480.335 - 14542.750: 98.1941% ( 4) 00:32:41.413 14542.750 - 14605.166: 98.2177% ( 3) 00:32:41.413 14605.166 - 14667.581: 98.2412% ( 3) 00:32:41.413 14667.581 - 14729.996: 98.2726% ( 4) 00:32:41.413 14729.996 - 14792.411: 98.2962% ( 3) 00:32:41.413 14792.411 - 14854.827: 98.3276% ( 4) 00:32:41.413 14854.827 - 14917.242: 98.3511% ( 3) 00:32:41.413 14917.242 - 14979.657: 98.3747% ( 3) 00:32:41.413 14979.657 - 15042.072: 98.3982% ( 3) 00:32:41.413 15042.072 - 15104.488: 98.4218% ( 3) 00:32:41.413 15104.488 - 15166.903: 98.4532% ( 4) 00:32:41.413 15166.903 - 15229.318: 98.4768% ( 3) 00:32:41.413 15229.318 - 15291.733: 98.4925% ( 2) 00:32:41.413 16227.962 - 16352.792: 98.5317% ( 5) 00:32:41.413 16352.792 - 16477.623: 98.5945% ( 8) 00:32:41.413 16477.623 - 16602.453: 98.6573% ( 8) 00:32:41.413 16602.453 - 16727.284: 98.7202% ( 8) 00:32:41.413 16727.284 - 16852.114: 98.7830% ( 8) 00:32:41.413 16852.114 - 16976.945: 98.8458% ( 8) 00:32:41.413 16976.945 - 17101.775: 98.9086% ( 8) 00:32:41.413 17101.775 - 17226.606: 98.9636% ( 7) 00:32:41.413 17226.606 - 17351.436: 98.9950% ( 4) 00:32:41.413 32705.585 - 32955.246: 99.0264% ( 4) 00:32:41.413 32955.246 - 33204.907: 99.0970% ( 9) 00:32:41.413 33204.907 - 33454.568: 99.1520% ( 7) 00:32:41.413 33454.568 - 33704.229: 99.2148% ( 8) 00:32:41.413 33704.229 - 33953.890: 99.2776% ( 8) 00:32:41.413 33953.890 - 34203.550: 99.3326% ( 7) 00:32:41.413 34203.550 - 34453.211: 99.3954% ( 8) 00:32:41.413 34453.211 - 34702.872: 99.4504% ( 7) 00:32:41.413 34702.872 - 34952.533: 99.4975% ( 6) 00:32:41.413 38947.109 - 39196.770: 99.5289% ( 4) 00:32:41.413 39196.770 - 39446.430: 99.5839% ( 7) 00:32:41.413 39446.430 - 39696.091: 99.6388% ( 7) 00:32:41.413 39696.091 - 39945.752: 99.7016% ( 8) 00:32:41.413 39945.752 - 40195.413: 99.7566% ( 7) 00:32:41.413 40195.413 - 40445.074: 99.8194% ( 8) 00:32:41.413 40445.074 - 40694.735: 99.8822% ( 8) 00:32:41.413 40694.735 - 40944.396: 99.9372% ( 7) 00:32:41.413 40944.396 - 41194.057: 99.9921% ( 7) 00:32:41.413 41194.057 - 41443.718: 100.0000% ( 1) 00:32:41.413 00:32:41.413 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:32:41.413 ============================================================================== 00:32:41.413 Range in us Cumulative IO count 00:32:41.413 8301.227 - 8363.642: 0.0707% ( 9) 00:32:41.413 8363.642 - 8426.057: 0.2356% ( 21) 00:32:41.413 8426.057 - 8488.472: 0.5889% ( 45) 00:32:41.413 8488.472 - 8550.888: 1.5075% ( 117) 00:32:41.413 8550.888 - 8613.303: 3.0072% ( 191) 00:32:41.413 8613.303 - 
8675.718: 5.6690% ( 339) 00:32:41.413 8675.718 - 8738.133: 8.7076% ( 387) 00:32:41.413 8738.133 - 8800.549: 12.1074% ( 433) 00:32:41.413 8800.549 - 8862.964: 16.1825% ( 519) 00:32:41.413 8862.964 - 8925.379: 20.0534% ( 493) 00:32:41.413 8925.379 - 8987.794: 24.0578% ( 510) 00:32:41.413 8987.794 - 9050.210: 28.2192% ( 530) 00:32:41.413 9050.210 - 9112.625: 32.3492% ( 526) 00:32:41.413 9112.625 - 9175.040: 36.4086% ( 517) 00:32:41.413 9175.040 - 9237.455: 40.6721% ( 543) 00:32:41.413 9237.455 - 9299.870: 44.7236% ( 516) 00:32:41.413 9299.870 - 9362.286: 48.6259% ( 497) 00:32:41.413 9362.286 - 9424.701: 52.2535% ( 462) 00:32:41.413 9424.701 - 9487.116: 55.4020% ( 401) 00:32:41.413 9487.116 - 9549.531: 57.7811% ( 303) 00:32:41.413 9549.531 - 9611.947: 59.5477% ( 225) 00:32:41.413 9611.947 - 9674.362: 60.7491% ( 153) 00:32:41.413 9674.362 - 9736.777: 61.5499% ( 102) 00:32:41.413 9736.777 - 9799.192: 62.0289% ( 61) 00:32:41.413 9799.192 - 9861.608: 62.5157% ( 62) 00:32:41.413 9861.608 - 9924.023: 62.9633% ( 57) 00:32:41.413 9924.023 - 9986.438: 63.4736% ( 65) 00:32:41.413 9986.438 - 10048.853: 64.3373% ( 110) 00:32:41.413 10048.853 - 10111.269: 65.4052% ( 136) 00:32:41.413 10111.269 - 10173.684: 66.5358% ( 144) 00:32:41.413 10173.684 - 10236.099: 67.8785% ( 171) 00:32:41.413 10236.099 - 10298.514: 69.3389% ( 186) 00:32:41.413 10298.514 - 10360.930: 70.7993% ( 186) 00:32:41.413 10360.930 - 10423.345: 72.4089% ( 205) 00:32:41.413 10423.345 - 10485.760: 73.9479% ( 196) 00:32:41.413 10485.760 - 10548.175: 75.6046% ( 211) 00:32:41.413 10548.175 - 10610.590: 77.3084% ( 217) 00:32:41.413 10610.590 - 10673.006: 79.0280% ( 219) 00:32:41.413 10673.006 - 10735.421: 80.7867% ( 224) 00:32:41.413 10735.421 - 10797.836: 82.6084% ( 232) 00:32:41.413 10797.836 - 10860.251: 84.3436% ( 221) 00:32:41.413 10860.251 - 10922.667: 85.9454% ( 204) 00:32:41.413 10922.667 - 10985.082: 87.2959% ( 172) 00:32:41.413 10985.082 - 11047.497: 88.4108% ( 142) 00:32:41.413 11047.497 - 11109.912: 89.3687% ( 122) 00:32:41.413 11109.912 - 11172.328: 90.1774% ( 103) 00:32:41.413 11172.328 - 11234.743: 90.8527% ( 86) 00:32:41.413 11234.743 - 11297.158: 91.5358% ( 87) 00:32:41.413 11297.158 - 11359.573: 92.1561% ( 79) 00:32:41.413 11359.573 - 11421.989: 92.7214% ( 72) 00:32:41.413 11421.989 - 11484.404: 93.2867% ( 72) 00:32:41.413 11484.404 - 11546.819: 93.8050% ( 66) 00:32:41.413 11546.819 - 11609.234: 94.2839% ( 61) 00:32:41.413 11609.234 - 11671.650: 94.6530% ( 47) 00:32:41.413 11671.650 - 11734.065: 94.9827% ( 42) 00:32:41.413 11734.065 - 11796.480: 95.1790% ( 25) 00:32:41.413 11796.480 - 11858.895: 95.3596% ( 23) 00:32:41.413 11858.895 - 11921.310: 95.5559% ( 25) 00:32:41.413 11921.310 - 11983.726: 95.6894% ( 17) 00:32:41.413 11983.726 - 12046.141: 95.8307% ( 18) 00:32:41.413 12046.141 - 12108.556: 95.9328% ( 13) 00:32:41.413 12108.556 - 12170.971: 96.0663% ( 17) 00:32:41.413 12170.971 - 12233.387: 96.1526% ( 11) 00:32:41.413 12233.387 - 12295.802: 96.2626% ( 14) 00:32:41.413 12295.802 - 12358.217: 96.3882% ( 16) 00:32:41.413 12358.217 - 12420.632: 96.4824% ( 12) 00:32:41.413 12420.632 - 12483.048: 96.5845% ( 13) 00:32:41.413 12483.048 - 12545.463: 96.7101% ( 16) 00:32:41.414 12545.463 - 12607.878: 96.8200% ( 14) 00:32:41.414 12607.878 - 12670.293: 96.9300% ( 14) 00:32:41.414 12670.293 - 12732.709: 97.0320% ( 13) 00:32:41.414 12732.709 - 12795.124: 97.1341% ( 13) 00:32:41.414 12795.124 - 12857.539: 97.2205% ( 11) 00:32:41.414 12857.539 - 12919.954: 97.2754% ( 7) 00:32:41.414 12919.954 - 12982.370: 97.3304% ( 7) 00:32:41.414 
12982.370 - 13044.785: 97.3932% ( 8) 00:32:41.414 13044.785 - 13107.200: 97.4482% ( 7) 00:32:41.414 13107.200 - 13169.615: 97.4874% ( 5) 00:32:41.414 13481.691 - 13544.107: 97.5188% ( 4) 00:32:41.414 13544.107 - 13606.522: 97.5581% ( 5) 00:32:41.414 13606.522 - 13668.937: 97.6288% ( 9) 00:32:41.414 13668.937 - 13731.352: 97.6837% ( 7) 00:32:41.414 13731.352 - 13793.768: 97.7544% ( 9) 00:32:41.414 13793.768 - 13856.183: 97.8172% ( 8) 00:32:41.414 13856.183 - 13918.598: 97.8879% ( 9) 00:32:41.414 13918.598 - 13981.013: 97.9428% ( 7) 00:32:41.414 13981.013 - 14043.429: 98.0057% ( 8) 00:32:41.414 14043.429 - 14105.844: 98.0763% ( 9) 00:32:41.414 14105.844 - 14168.259: 98.1313% ( 7) 00:32:41.414 14168.259 - 14230.674: 98.1941% ( 8) 00:32:41.414 14230.674 - 14293.090: 98.2648% ( 9) 00:32:41.414 14293.090 - 14355.505: 98.3276% ( 8) 00:32:41.414 14355.505 - 14417.920: 98.3825% ( 7) 00:32:41.414 14417.920 - 14480.335: 98.4139% ( 4) 00:32:41.414 14480.335 - 14542.750: 98.4454% ( 4) 00:32:41.414 14542.750 - 14605.166: 98.4768% ( 4) 00:32:41.414 14605.166 - 14667.581: 98.4925% ( 2) 00:32:41.414 16602.453 - 16727.284: 98.5317% ( 5) 00:32:41.414 16727.284 - 16852.114: 98.6338% ( 13) 00:32:41.414 16852.114 - 16976.945: 98.6573% ( 3) 00:32:41.414 16976.945 - 17101.775: 98.7437% ( 11) 00:32:41.414 17101.775 - 17226.606: 98.7908% ( 6) 00:32:41.414 17226.606 - 17351.436: 98.8536% ( 8) 00:32:41.414 17351.436 - 17476.267: 98.9086% ( 7) 00:32:41.414 17476.267 - 17601.097: 98.9714% ( 8) 00:32:41.414 17601.097 - 17725.928: 98.9950% ( 3) 00:32:41.414 30333.806 - 30458.636: 99.0264% ( 4) 00:32:41.414 30458.636 - 30583.467: 99.0578% ( 4) 00:32:41.414 30583.467 - 30708.297: 99.0892% ( 4) 00:32:41.414 30708.297 - 30833.128: 99.1206% ( 4) 00:32:41.414 30833.128 - 30957.958: 99.1520% ( 4) 00:32:41.414 30957.958 - 31082.789: 99.1834% ( 4) 00:32:41.414 31082.789 - 31207.619: 99.2148% ( 4) 00:32:41.414 31207.619 - 31332.450: 99.2462% ( 4) 00:32:41.414 31332.450 - 31457.280: 99.2776% ( 4) 00:32:41.414 31457.280 - 31582.110: 99.3090% ( 4) 00:32:41.414 31582.110 - 31706.941: 99.3405% ( 4) 00:32:41.414 31706.941 - 31831.771: 99.3719% ( 4) 00:32:41.414 31831.771 - 31956.602: 99.3954% ( 3) 00:32:41.414 31956.602 - 32206.263: 99.4582% ( 8) 00:32:41.414 32206.263 - 32455.924: 99.4975% ( 5) 00:32:41.414 36450.499 - 36700.160: 99.5367% ( 5) 00:32:41.414 36700.160 - 36949.821: 99.5996% ( 8) 00:32:41.414 36949.821 - 37199.482: 99.6545% ( 7) 00:32:41.414 37199.482 - 37449.143: 99.7095% ( 7) 00:32:41.414 37449.143 - 37698.804: 99.7644% ( 7) 00:32:41.414 37698.804 - 37948.465: 99.8273% ( 8) 00:32:41.414 37948.465 - 38198.126: 99.8901% ( 8) 00:32:41.414 38198.126 - 38447.787: 99.9529% ( 8) 00:32:41.414 38447.787 - 38697.448: 100.0000% ( 6) 00:32:41.414 00:32:41.414 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:32:41.414 ============================================================================== 00:32:41.414 Range in us Cumulative IO count 00:32:41.414 8301.227 - 8363.642: 0.0550% ( 7) 00:32:41.414 8363.642 - 8426.057: 0.2120% ( 20) 00:32:41.414 8426.057 - 8488.472: 0.5967% ( 49) 00:32:41.414 8488.472 - 8550.888: 1.3584% ( 97) 00:32:41.414 8550.888 - 8613.303: 3.0151% ( 211) 00:32:41.414 8613.303 - 8675.718: 5.4648% ( 312) 00:32:41.414 8675.718 - 8738.133: 8.5427% ( 392) 00:32:41.414 8738.133 - 8800.549: 12.1388% ( 458) 00:32:41.414 8800.549 - 8862.964: 16.0411% ( 497) 00:32:41.414 8862.964 - 8925.379: 20.1241% ( 520) 00:32:41.414 8925.379 - 8987.794: 24.2933% ( 531) 00:32:41.414 8987.794 - 9050.210: 28.4391% ( 528) 
00:32:41.414 9050.210 - 9112.625: 32.6398% ( 535) 00:32:41.414 9112.625 - 9175.040: 36.6913% ( 516) 00:32:41.414 9175.040 - 9237.455: 40.7742% ( 520) 00:32:41.414 9237.455 - 9299.870: 44.7943% ( 512) 00:32:41.414 9299.870 - 9362.286: 48.6966% ( 497) 00:32:41.414 9362.286 - 9424.701: 52.3398% ( 464) 00:32:41.414 9424.701 - 9487.116: 55.4884% ( 401) 00:32:41.414 9487.116 - 9549.531: 57.7889% ( 293) 00:32:41.414 9549.531 - 9611.947: 59.4692% ( 214) 00:32:41.414 9611.947 - 9674.362: 60.7726% ( 166) 00:32:41.414 9674.362 - 9736.777: 61.5892% ( 104) 00:32:41.414 9736.777 - 9799.192: 62.0996% ( 65) 00:32:41.414 9799.192 - 9861.608: 62.4686% ( 47) 00:32:41.414 9861.608 - 9924.023: 62.8926% ( 54) 00:32:41.414 9924.023 - 9986.438: 63.4422% ( 70) 00:32:41.414 9986.438 - 10048.853: 64.2509% ( 103) 00:32:41.414 10048.853 - 10111.269: 65.2324% ( 125) 00:32:41.414 10111.269 - 10173.684: 66.4808% ( 159) 00:32:41.414 10173.684 - 10236.099: 67.6743% ( 152) 00:32:41.414 10236.099 - 10298.514: 69.1112% ( 183) 00:32:41.414 10298.514 - 10360.930: 70.5952% ( 189) 00:32:41.414 10360.930 - 10423.345: 72.2362% ( 209) 00:32:41.414 10423.345 - 10485.760: 73.8065% ( 200) 00:32:41.414 10485.760 - 10548.175: 75.4476% ( 209) 00:32:41.414 10548.175 - 10610.590: 77.1121% ( 212) 00:32:41.414 10610.590 - 10673.006: 78.7845% ( 213) 00:32:41.414 10673.006 - 10735.421: 80.5512% ( 225) 00:32:41.414 10735.421 - 10797.836: 82.3807% ( 233) 00:32:41.414 10797.836 - 10860.251: 84.1080% ( 220) 00:32:41.414 10860.251 - 10922.667: 85.5842% ( 188) 00:32:41.414 10922.667 - 10985.082: 86.9739% ( 177) 00:32:41.414 10985.082 - 11047.497: 88.1910% ( 155) 00:32:41.414 11047.497 - 11109.912: 89.1646% ( 124) 00:32:41.414 11109.912 - 11172.328: 90.0440% ( 112) 00:32:41.414 11172.328 - 11234.743: 90.8291% ( 100) 00:32:41.414 11234.743 - 11297.158: 91.4808% ( 83) 00:32:41.414 11297.158 - 11359.573: 92.0933% ( 78) 00:32:41.414 11359.573 - 11421.989: 92.6979% ( 77) 00:32:41.414 11421.989 - 11484.404: 93.2396% ( 69) 00:32:41.414 11484.404 - 11546.819: 93.7186% ( 61) 00:32:41.414 11546.819 - 11609.234: 94.1897% ( 60) 00:32:41.414 11609.234 - 11671.650: 94.6215% ( 55) 00:32:41.414 11671.650 - 11734.065: 94.9513% ( 42) 00:32:41.414 11734.065 - 11796.480: 95.2340% ( 36) 00:32:41.414 11796.480 - 11858.895: 95.5009% ( 34) 00:32:41.414 11858.895 - 11921.310: 95.6815% ( 23) 00:32:41.414 11921.310 - 11983.726: 95.8150% ( 17) 00:32:41.414 11983.726 - 12046.141: 95.9406% ( 16) 00:32:41.414 12046.141 - 12108.556: 96.0584% ( 15) 00:32:41.414 12108.556 - 12170.971: 96.1762% ( 15) 00:32:41.414 12170.971 - 12233.387: 96.2783% ( 13) 00:32:41.414 12233.387 - 12295.802: 96.3725% ( 12) 00:32:41.414 12295.802 - 12358.217: 96.4667% ( 12) 00:32:41.414 12358.217 - 12420.632: 96.5688% ( 13) 00:32:41.414 12420.632 - 12483.048: 96.6787% ( 14) 00:32:41.414 12483.048 - 12545.463: 96.7886% ( 14) 00:32:41.414 12545.463 - 12607.878: 96.9064% ( 15) 00:32:41.414 12607.878 - 12670.293: 97.0477% ( 18) 00:32:41.414 12670.293 - 12732.709: 97.2048% ( 20) 00:32:41.414 12732.709 - 12795.124: 97.2990% ( 12) 00:32:41.414 12795.124 - 12857.539: 97.3697% ( 9) 00:32:41.414 12857.539 - 12919.954: 97.4639% ( 12) 00:32:41.414 12919.954 - 12982.370: 97.5424% ( 10) 00:32:41.414 12982.370 - 13044.785: 97.6131% ( 9) 00:32:41.414 13044.785 - 13107.200: 97.6837% ( 9) 00:32:41.414 13107.200 - 13169.615: 97.7308% ( 6) 00:32:41.414 13169.615 - 13232.030: 97.7858% ( 7) 00:32:41.414 13232.030 - 13294.446: 97.8094% ( 3) 00:32:41.414 13294.446 - 13356.861: 97.8329% ( 3) 00:32:41.414 13356.861 - 13419.276: 
97.8643% ( 4) 00:32:41.414 13419.276 - 13481.691: 97.8957% ( 4) 00:32:41.414 13481.691 - 13544.107: 97.9271% ( 4) 00:32:41.414 13544.107 - 13606.522: 97.9585% ( 4) 00:32:41.414 13606.522 - 13668.937: 97.9899% ( 4) 00:32:41.414 14729.996 - 14792.411: 98.0057% ( 2) 00:32:41.414 14792.411 - 14854.827: 98.0606% ( 7) 00:32:41.414 14854.827 - 14917.242: 98.0842% ( 3) 00:32:41.414 14917.242 - 14979.657: 98.1234% ( 5) 00:32:41.414 14979.657 - 15042.072: 98.1548% ( 4) 00:32:41.414 15042.072 - 15104.488: 98.1941% ( 5) 00:32:41.414 15104.488 - 15166.903: 98.2255% ( 4) 00:32:41.414 15166.903 - 15229.318: 98.2569% ( 4) 00:32:41.414 15229.318 - 15291.733: 98.2962% ( 5) 00:32:41.414 15291.733 - 15354.149: 98.3276% ( 4) 00:32:41.414 15354.149 - 15416.564: 98.3668% ( 5) 00:32:41.414 15416.564 - 15478.979: 98.3982% ( 4) 00:32:41.414 15478.979 - 15541.394: 98.4375% ( 5) 00:32:41.414 15541.394 - 15603.810: 98.4689% ( 4) 00:32:41.414 15603.810 - 15666.225: 98.4925% ( 3) 00:32:41.414 16852.114 - 16976.945: 98.5396% ( 6) 00:32:41.414 16976.945 - 17101.775: 98.5945% ( 7) 00:32:41.414 17101.775 - 17226.606: 98.6416% ( 6) 00:32:41.414 17226.606 - 17351.436: 98.6888% ( 6) 00:32:41.414 17351.436 - 17476.267: 98.7359% ( 6) 00:32:41.414 17476.267 - 17601.097: 98.7830% ( 6) 00:32:41.414 17601.097 - 17725.928: 98.8301% ( 6) 00:32:41.414 17725.928 - 17850.758: 98.8772% ( 6) 00:32:41.414 17850.758 - 17975.589: 98.9243% ( 6) 00:32:41.414 17975.589 - 18100.419: 98.9636% ( 5) 00:32:41.414 18100.419 - 18225.250: 98.9950% ( 4) 00:32:41.414 27837.196 - 27962.027: 99.0185% ( 3) 00:32:41.414 27962.027 - 28086.857: 99.0499% ( 4) 00:32:41.414 28086.857 - 28211.688: 99.0735% ( 3) 00:32:41.414 28211.688 - 28336.518: 99.1049% ( 4) 00:32:41.414 28336.518 - 28461.349: 99.1363% ( 4) 00:32:41.414 28461.349 - 28586.179: 99.1599% ( 3) 00:32:41.414 28586.179 - 28711.010: 99.1913% ( 4) 00:32:41.414 28711.010 - 28835.840: 99.2227% ( 4) 00:32:41.414 28835.840 - 28960.670: 99.2541% ( 4) 00:32:41.415 28960.670 - 29085.501: 99.2776% ( 3) 00:32:41.415 29085.501 - 29210.331: 99.3090% ( 4) 00:32:41.415 29210.331 - 29335.162: 99.3326% ( 3) 00:32:41.415 29335.162 - 29459.992: 99.3719% ( 5) 00:32:41.415 29459.992 - 29584.823: 99.3954% ( 3) 00:32:41.415 29584.823 - 29709.653: 99.4268% ( 4) 00:32:41.415 29709.653 - 29834.484: 99.4582% ( 4) 00:32:41.415 29834.484 - 29959.314: 99.4896% ( 4) 00:32:41.415 29959.314 - 30084.145: 99.4975% ( 1) 00:32:41.415 33953.890 - 34203.550: 99.5132% ( 2) 00:32:41.415 34203.550 - 34453.211: 99.5760% ( 8) 00:32:41.415 34453.211 - 34702.872: 99.6388% ( 8) 00:32:41.415 34702.872 - 34952.533: 99.7016% ( 8) 00:32:41.415 34952.533 - 35202.194: 99.7566% ( 7) 00:32:41.415 35202.194 - 35451.855: 99.8194% ( 8) 00:32:41.415 35451.855 - 35701.516: 99.8901% ( 9) 00:32:41.415 35701.516 - 35951.177: 99.9529% ( 8) 00:32:41.415 35951.177 - 36200.838: 100.0000% ( 6) 00:32:41.415 00:32:41.415 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:32:41.415 ============================================================================== 00:32:41.415 Range in us Cumulative IO count 00:32:41.415 8301.227 - 8363.642: 0.0391% ( 5) 00:32:41.415 8363.642 - 8426.057: 0.2188% ( 23) 00:32:41.415 8426.057 - 8488.472: 0.6719% ( 58) 00:32:41.415 8488.472 - 8550.888: 1.4297% ( 97) 00:32:41.415 8550.888 - 8613.303: 3.0781% ( 211) 00:32:41.415 8613.303 - 8675.718: 5.4141% ( 299) 00:32:41.415 8675.718 - 8738.133: 8.5703% ( 404) 00:32:41.415 8738.133 - 8800.549: 12.1094% ( 453) 00:32:41.415 8800.549 - 8862.964: 16.1250% ( 514) 00:32:41.415 8862.964 - 
8925.379: 20.2891% ( 533) 00:32:41.415 8925.379 - 8987.794: 24.2656% ( 509) 00:32:41.415 8987.794 - 9050.210: 28.4453% ( 535) 00:32:41.415 9050.210 - 9112.625: 32.5625% ( 527) 00:32:41.415 9112.625 - 9175.040: 36.6719% ( 526) 00:32:41.415 9175.040 - 9237.455: 40.7812% ( 526) 00:32:41.415 9237.455 - 9299.870: 44.8750% ( 524) 00:32:41.415 9299.870 - 9362.286: 48.8281% ( 506) 00:32:41.415 9362.286 - 9424.701: 52.5547% ( 477) 00:32:41.415 9424.701 - 9487.116: 55.7344% ( 407) 00:32:41.415 9487.116 - 9549.531: 58.2422% ( 321) 00:32:41.415 9549.531 - 9611.947: 59.9609% ( 220) 00:32:41.415 9611.947 - 9674.362: 61.1719% ( 155) 00:32:41.415 9674.362 - 9736.777: 61.9453% ( 99) 00:32:41.415 9736.777 - 9799.192: 62.3438% ( 51) 00:32:41.415 9799.192 - 9861.608: 62.6953% ( 45) 00:32:41.415 9861.608 - 9924.023: 63.1172% ( 54) 00:32:41.415 9924.023 - 9986.438: 63.6562% ( 69) 00:32:41.415 9986.438 - 10048.853: 64.3828% ( 93) 00:32:41.415 10048.853 - 10111.269: 65.3594% ( 125) 00:32:41.415 10111.269 - 10173.684: 66.4844% ( 144) 00:32:41.415 10173.684 - 10236.099: 67.8594% ( 176) 00:32:41.415 10236.099 - 10298.514: 69.3125% ( 186) 00:32:41.415 10298.514 - 10360.930: 70.8047% ( 191) 00:32:41.415 10360.930 - 10423.345: 72.3594% ( 199) 00:32:41.415 10423.345 - 10485.760: 74.0234% ( 213) 00:32:41.415 10485.760 - 10548.175: 75.6797% ( 212) 00:32:41.415 10548.175 - 10610.590: 77.3984% ( 220) 00:32:41.415 10610.590 - 10673.006: 79.1094% ( 219) 00:32:41.415 10673.006 - 10735.421: 80.8516% ( 223) 00:32:41.415 10735.421 - 10797.836: 82.5000% ( 211) 00:32:41.415 10797.836 - 10860.251: 84.1172% ( 207) 00:32:41.415 10860.251 - 10922.667: 85.5859% ( 188) 00:32:41.415 10922.667 - 10985.082: 86.9062% ( 169) 00:32:41.415 10985.082 - 11047.497: 88.0312% ( 144) 00:32:41.415 11047.497 - 11109.912: 89.0000% ( 124) 00:32:41.415 11109.912 - 11172.328: 89.8047% ( 103) 00:32:41.415 11172.328 - 11234.743: 90.5078% ( 90) 00:32:41.415 11234.743 - 11297.158: 91.1797% ( 86) 00:32:41.415 11297.158 - 11359.573: 91.7578% ( 74) 00:32:41.415 11359.573 - 11421.989: 92.3672% ( 78) 00:32:41.415 11421.989 - 11484.404: 92.9453% ( 74) 00:32:41.415 11484.404 - 11546.819: 93.4922% ( 70) 00:32:41.415 11546.819 - 11609.234: 94.0156% ( 67) 00:32:41.415 11609.234 - 11671.650: 94.4844% ( 60) 00:32:41.415 11671.650 - 11734.065: 94.8594% ( 48) 00:32:41.415 11734.065 - 11796.480: 95.1797% ( 41) 00:32:41.415 11796.480 - 11858.895: 95.4531% ( 35) 00:32:41.415 11858.895 - 11921.310: 95.6562% ( 26) 00:32:41.415 11921.310 - 11983.726: 95.8359% ( 23) 00:32:41.415 11983.726 - 12046.141: 96.0000% ( 21) 00:32:41.415 12046.141 - 12108.556: 96.1719% ( 22) 00:32:41.415 12108.556 - 12170.971: 96.3672% ( 25) 00:32:41.415 12170.971 - 12233.387: 96.5391% ( 22) 00:32:41.415 12233.387 - 12295.802: 96.7031% ( 21) 00:32:41.415 12295.802 - 12358.217: 96.8281% ( 16) 00:32:41.415 12358.217 - 12420.632: 96.9375% ( 14) 00:32:41.415 12420.632 - 12483.048: 97.0391% ( 13) 00:32:41.415 12483.048 - 12545.463: 97.1094% ( 9) 00:32:41.415 12545.463 - 12607.878: 97.1953% ( 11) 00:32:41.415 12607.878 - 12670.293: 97.2656% ( 9) 00:32:41.415 12670.293 - 12732.709: 97.3516% ( 11) 00:32:41.415 12732.709 - 12795.124: 97.4375% ( 11) 00:32:41.415 12795.124 - 12857.539: 97.5234% ( 11) 00:32:41.415 12857.539 - 12919.954: 97.6016% ( 10) 00:32:41.415 12919.954 - 12982.370: 97.6797% ( 10) 00:32:41.415 12982.370 - 13044.785: 97.7734% ( 12) 00:32:41.415 13044.785 - 13107.200: 97.8125% ( 5) 00:32:41.415 13107.200 - 13169.615: 97.8672% ( 7) 00:32:41.415 13169.615 - 13232.030: 97.8984% ( 4) 00:32:41.415 
13232.030 - 13294.446: 97.9141% ( 2) 00:32:41.415 13294.446 - 13356.861: 97.9375% ( 3) 00:32:41.415 13356.861 - 13419.276: 97.9531% ( 2) 00:32:41.415 13419.276 - 13481.691: 97.9766% ( 3) 00:32:41.415 13481.691 - 13544.107: 98.0000% ( 3) 00:32:41.415 15354.149 - 15416.564: 98.0078% ( 1) 00:32:41.415 15416.564 - 15478.979: 98.0391% ( 4) 00:32:41.415 15478.979 - 15541.394: 98.0781% ( 5) 00:32:41.415 15541.394 - 15603.810: 98.1094% ( 4) 00:32:41.415 15603.810 - 15666.225: 98.1406% ( 4) 00:32:41.415 15666.225 - 15728.640: 98.1953% ( 7) 00:32:41.415 15728.640 - 15791.055: 98.2266% ( 4) 00:32:41.415 15791.055 - 15853.470: 98.2656% ( 5) 00:32:41.415 15853.470 - 15915.886: 98.2969% ( 4) 00:32:41.415 15915.886 - 15978.301: 98.3359% ( 5) 00:32:41.415 15978.301 - 16103.131: 98.3984% ( 8) 00:32:41.415 16103.131 - 16227.962: 98.4688% ( 9) 00:32:41.415 16227.962 - 16352.792: 98.5000% ( 4) 00:32:41.415 17226.606 - 17351.436: 98.5312% ( 4) 00:32:41.415 17351.436 - 17476.267: 98.5781% ( 6) 00:32:41.415 17476.267 - 17601.097: 98.6250% ( 6) 00:32:41.415 17601.097 - 17725.928: 98.6719% ( 6) 00:32:41.415 17725.928 - 17850.758: 98.7188% ( 6) 00:32:41.415 17850.758 - 17975.589: 98.7734% ( 7) 00:32:41.415 17975.589 - 18100.419: 98.8281% ( 7) 00:32:41.415 18100.419 - 18225.250: 98.8906% ( 8) 00:32:41.415 18225.250 - 18350.080: 98.9453% ( 7) 00:32:41.415 18350.080 - 18474.910: 98.9922% ( 6) 00:32:41.415 18474.910 - 18599.741: 99.0000% ( 1) 00:32:41.415 20846.690 - 20971.520: 99.0078% ( 1) 00:32:41.415 20971.520 - 21096.350: 99.0391% ( 4) 00:32:41.415 21096.350 - 21221.181: 99.0703% ( 4) 00:32:41.415 21221.181 - 21346.011: 99.1016% ( 4) 00:32:41.415 21346.011 - 21470.842: 99.1328% ( 4) 00:32:41.415 21470.842 - 21595.672: 99.1562% ( 3) 00:32:41.415 21595.672 - 21720.503: 99.1875% ( 4) 00:32:41.415 21720.503 - 21845.333: 99.2188% ( 4) 00:32:41.415 21845.333 - 21970.164: 99.2500% ( 4) 00:32:41.415 21970.164 - 22094.994: 99.2812% ( 4) 00:32:41.415 22094.994 - 22219.825: 99.3125% ( 4) 00:32:41.415 22219.825 - 22344.655: 99.3438% ( 4) 00:32:41.415 22344.655 - 22469.486: 99.3750% ( 4) 00:32:41.415 22469.486 - 22594.316: 99.4062% ( 4) 00:32:41.415 22594.316 - 22719.147: 99.4375% ( 4) 00:32:41.415 22719.147 - 22843.977: 99.4609% ( 3) 00:32:41.415 22843.977 - 22968.808: 99.4922% ( 4) 00:32:41.415 22968.808 - 23093.638: 99.5000% ( 1) 00:32:41.415 26963.383 - 27088.213: 99.5078% ( 1) 00:32:41.415 27088.213 - 27213.044: 99.5391% ( 4) 00:32:41.415 27213.044 - 27337.874: 99.5625% ( 3) 00:32:41.415 27337.874 - 27462.705: 99.6016% ( 5) 00:32:41.415 27462.705 - 27587.535: 99.6328% ( 4) 00:32:41.415 27587.535 - 27712.366: 99.6641% ( 4) 00:32:41.415 27712.366 - 27837.196: 99.6953% ( 4) 00:32:41.415 27837.196 - 27962.027: 99.7266% ( 4) 00:32:41.415 27962.027 - 28086.857: 99.7578% ( 4) 00:32:41.415 28086.857 - 28211.688: 99.7891% ( 4) 00:32:41.415 28211.688 - 28336.518: 99.8203% ( 4) 00:32:41.416 28336.518 - 28461.349: 99.8516% ( 4) 00:32:41.416 28461.349 - 28586.179: 99.8828% ( 4) 00:32:41.416 28586.179 - 28711.010: 99.9141% ( 4) 00:32:41.416 28711.010 - 28835.840: 99.9453% ( 4) 00:32:41.416 28835.840 - 28960.670: 99.9844% ( 5) 00:32:41.416 28960.670 - 29085.501: 100.0000% ( 2) 00:32:41.416 00:32:41.416 13:53:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:32:42.795 Initializing NVMe Controllers 00:32:42.795 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:42.795 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:42.795 
Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:42.795 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:42.795 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:42.795 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:32:42.795 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:32:42.795 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:32:42.795 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:32:42.795 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:32:42.795 Initialization complete. Launching workers. 00:32:42.795 ======================================================== 00:32:42.795 Latency(us) 00:32:42.795 Device Information : IOPS MiB/s Average min max 00:32:42.795 PCIE (0000:00:10.0) NSID 1 from core 0: 11191.28 131.15 11485.58 8979.70 39915.47 00:32:42.795 PCIE (0000:00:11.0) NSID 1 from core 0: 11191.28 131.15 11472.58 8978.50 37705.75 00:32:42.795 PCIE (0000:00:13.0) NSID 1 from core 0: 11191.28 131.15 11459.81 8973.72 36480.14 00:32:42.795 PCIE (0000:00:12.0) NSID 1 from core 0: 11191.28 131.15 11446.86 8925.91 34482.18 00:32:42.795 PCIE (0000:00:12.0) NSID 2 from core 0: 11191.28 131.15 11434.14 9172.63 32550.21 00:32:42.795 PCIE (0000:00:12.0) NSID 3 from core 0: 11255.23 131.90 11356.86 9119.77 25178.61 00:32:42.795 ======================================================== 00:32:42.795 Total : 67211.64 787.64 11442.56 8925.91 39915.47 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9362.286us 00:32:42.795 10.00000% : 10048.853us 00:32:42.795 25.00000% : 10485.760us 00:32:42.795 50.00000% : 11109.912us 00:32:42.795 75.00000% : 11983.726us 00:32:42.795 90.00000% : 12795.124us 00:32:42.795 95.00000% : 13232.030us 00:32:42.795 98.00000% : 13856.183us 00:32:42.795 99.00000% : 31207.619us 00:32:42.795 99.50000% : 38198.126us 00:32:42.795 99.90000% : 39696.091us 00:32:42.795 99.99000% : 39945.752us 00:32:42.795 99.99900% : 39945.752us 00:32:42.795 99.99990% : 39945.752us 00:32:42.795 99.99999% : 39945.752us 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9549.531us 00:32:42.795 10.00000% : 10048.853us 00:32:42.795 25.00000% : 10610.590us 00:32:42.795 50.00000% : 11047.497us 00:32:42.795 75.00000% : 11983.726us 00:32:42.795 90.00000% : 12732.709us 00:32:42.795 95.00000% : 13107.200us 00:32:42.795 98.00000% : 13606.522us 00:32:42.795 99.00000% : 29210.331us 00:32:42.795 99.50000% : 36200.838us 00:32:42.795 99.90000% : 37449.143us 00:32:42.795 99.99000% : 37698.804us 00:32:42.795 99.99900% : 37948.465us 00:32:42.795 99.99990% : 37948.465us 00:32:42.795 99.99999% : 37948.465us 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9549.531us 00:32:42.795 10.00000% : 10048.853us 00:32:42.795 25.00000% : 10610.590us 00:32:42.795 50.00000% : 10985.082us 00:32:42.795 75.00000% : 11921.310us 00:32:42.795 90.00000% : 12732.709us 00:32:42.795 95.00000% : 13169.615us 00:32:42.795 98.00000% : 13668.937us 00:32:42.795 99.00000% : 28086.857us 00:32:42.795 99.50000% : 34952.533us 00:32:42.795 99.90000% : 36200.838us 00:32:42.795 99.99000% : 36450.499us 00:32:42.795 
99.99900% : 36700.160us 00:32:42.795 99.99990% : 36700.160us 00:32:42.795 99.99999% : 36700.160us 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9549.531us 00:32:42.795 10.00000% : 10048.853us 00:32:42.795 25.00000% : 10610.590us 00:32:42.795 50.00000% : 11047.497us 00:32:42.795 75.00000% : 11921.310us 00:32:42.795 90.00000% : 12795.124us 00:32:42.795 95.00000% : 13169.615us 00:32:42.795 98.00000% : 13793.768us 00:32:42.795 99.00000% : 26214.400us 00:32:42.795 99.50000% : 32955.246us 00:32:42.795 99.90000% : 34203.550us 00:32:42.795 99.99000% : 34702.872us 00:32:42.795 99.99900% : 34702.872us 00:32:42.795 99.99990% : 34702.872us 00:32:42.795 99.99999% : 34702.872us 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9549.531us 00:32:42.795 10.00000% : 10111.269us 00:32:42.795 25.00000% : 10548.175us 00:32:42.795 50.00000% : 10985.082us 00:32:42.795 75.00000% : 11983.726us 00:32:42.795 90.00000% : 12795.124us 00:32:42.795 95.00000% : 13169.615us 00:32:42.795 98.00000% : 13918.598us 00:32:42.795 99.00000% : 24466.773us 00:32:42.795 99.50000% : 30957.958us 00:32:42.795 99.90000% : 32455.924us 00:32:42.795 99.99000% : 32705.585us 00:32:42.795 99.99900% : 32705.585us 00:32:42.795 99.99990% : 32705.585us 00:32:42.795 99.99999% : 32705.585us 00:32:42.795 00:32:42.795 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:32:42.795 ================================================================================= 00:32:42.795 1.00000% : 9611.947us 00:32:42.795 10.00000% : 10111.269us 00:32:42.795 25.00000% : 10610.590us 00:32:42.795 50.00000% : 10985.082us 00:32:42.795 75.00000% : 11983.726us 00:32:42.795 90.00000% : 12732.709us 00:32:42.795 95.00000% : 13169.615us 00:32:42.795 98.00000% : 14542.750us 00:32:42.795 99.00000% : 18724.571us 00:32:42.795 99.50000% : 23592.960us 00:32:42.795 99.90000% : 24966.095us 00:32:42.795 99.99000% : 25215.756us 00:32:42.795 99.99900% : 25215.756us 00:32:42.795 99.99990% : 25215.756us 00:32:42.795 99.99999% : 25215.756us 00:32:42.795 00:32:42.795 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:42.795 ============================================================================== 00:32:42.795 Range in us Cumulative IO count 00:32:42.795 8925.379 - 8987.794: 0.0179% ( 2) 00:32:42.795 8987.794 - 9050.210: 0.0268% ( 1) 00:32:42.795 9050.210 - 9112.625: 0.0714% ( 5) 00:32:42.795 9112.625 - 9175.040: 0.1429% ( 8) 00:32:42.795 9175.040 - 9237.455: 0.1964% ( 6) 00:32:42.795 9237.455 - 9299.870: 0.5000% ( 34) 00:32:42.795 9299.870 - 9362.286: 1.2946% ( 89) 00:32:42.795 9362.286 - 9424.701: 2.2679% ( 109) 00:32:42.795 9424.701 - 9487.116: 2.8482% ( 65) 00:32:42.795 9487.116 - 9549.531: 3.6339% ( 88) 00:32:42.795 9549.531 - 9611.947: 4.4286% ( 89) 00:32:42.795 9611.947 - 9674.362: 5.1161% ( 77) 00:32:42.795 9674.362 - 9736.777: 5.9375% ( 92) 00:32:42.795 9736.777 - 9799.192: 6.6875% ( 84) 00:32:42.795 9799.192 - 9861.608: 7.2768% ( 66) 00:32:42.795 9861.608 - 9924.023: 8.0893% ( 91) 00:32:42.795 9924.023 - 9986.438: 8.9821% ( 100) 00:32:42.795 9986.438 - 10048.853: 10.0536% ( 120) 00:32:42.795 10048.853 - 10111.269: 11.5000% ( 162) 00:32:42.795 10111.269 - 10173.684: 13.6339% ( 239) 00:32:42.795 10173.684 - 10236.099: 15.6518% ( 
226) 00:32:42.795 10236.099 - 10298.514: 18.2321% ( 289) 00:32:42.795 10298.514 - 10360.930: 21.0804% ( 319) 00:32:42.795 10360.930 - 10423.345: 24.1429% ( 343) 00:32:42.795 10423.345 - 10485.760: 27.0714% ( 328) 00:32:42.795 10485.760 - 10548.175: 29.5625% ( 279) 00:32:42.795 10548.175 - 10610.590: 32.4018% ( 318) 00:32:42.795 10610.590 - 10673.006: 34.8393% ( 273) 00:32:42.795 10673.006 - 10735.421: 37.1161% ( 255) 00:32:42.795 10735.421 - 10797.836: 39.3929% ( 255) 00:32:42.795 10797.836 - 10860.251: 41.9732% ( 289) 00:32:42.795 10860.251 - 10922.667: 44.3036% ( 261) 00:32:42.795 10922.667 - 10985.082: 46.6964% ( 268) 00:32:42.795 10985.082 - 11047.497: 49.2054% ( 281) 00:32:42.795 11047.497 - 11109.912: 51.5536% ( 263) 00:32:42.795 11109.912 - 11172.328: 53.9464% ( 268) 00:32:42.795 11172.328 - 11234.743: 56.2857% ( 262) 00:32:42.795 11234.743 - 11297.158: 58.3750% ( 234) 00:32:42.795 11297.158 - 11359.573: 60.6875% ( 259) 00:32:42.795 11359.573 - 11421.989: 62.5982% ( 214) 00:32:42.795 11421.989 - 11484.404: 64.5625% ( 220) 00:32:42.795 11484.404 - 11546.819: 66.3929% ( 205) 00:32:42.795 11546.819 - 11609.234: 68.0089% ( 181) 00:32:42.795 11609.234 - 11671.650: 69.5446% ( 172) 00:32:42.795 11671.650 - 11734.065: 70.7946% ( 140) 00:32:42.795 11734.065 - 11796.480: 72.0982% ( 146) 00:32:42.795 11796.480 - 11858.895: 73.4464% ( 151) 00:32:42.796 11858.895 - 11921.310: 74.6518% ( 135) 00:32:42.796 11921.310 - 11983.726: 75.8661% ( 136) 00:32:42.796 11983.726 - 12046.141: 77.0982% ( 138) 00:32:42.796 12046.141 - 12108.556: 78.4107% ( 147) 00:32:42.796 12108.556 - 12170.971: 79.5179% ( 124) 00:32:42.796 12170.971 - 12233.387: 80.7768% ( 141) 00:32:42.796 12233.387 - 12295.802: 81.9286% ( 129) 00:32:42.796 12295.802 - 12358.217: 83.1339% ( 135) 00:32:42.796 12358.217 - 12420.632: 84.1518% ( 114) 00:32:42.796 12420.632 - 12483.048: 85.5804% ( 160) 00:32:42.796 12483.048 - 12545.463: 86.6518% ( 120) 00:32:42.796 12545.463 - 12607.878: 87.5804% ( 104) 00:32:42.796 12607.878 - 12670.293: 88.6071% ( 115) 00:32:42.796 12670.293 - 12732.709: 89.6607% ( 118) 00:32:42.796 12732.709 - 12795.124: 90.5268% ( 97) 00:32:42.796 12795.124 - 12857.539: 91.3571% ( 93) 00:32:42.796 12857.539 - 12919.954: 92.1518% ( 89) 00:32:42.796 12919.954 - 12982.370: 92.9911% ( 94) 00:32:42.796 12982.370 - 13044.785: 93.5893% ( 67) 00:32:42.796 13044.785 - 13107.200: 94.2143% ( 70) 00:32:42.796 13107.200 - 13169.615: 94.7589% ( 61) 00:32:42.796 13169.615 - 13232.030: 95.2946% ( 60) 00:32:42.796 13232.030 - 13294.446: 95.8036% ( 57) 00:32:42.796 13294.446 - 13356.861: 96.1875% ( 43) 00:32:42.796 13356.861 - 13419.276: 96.5268% ( 38) 00:32:42.796 13419.276 - 13481.691: 96.8482% ( 36) 00:32:42.796 13481.691 - 13544.107: 97.2143% ( 41) 00:32:42.796 13544.107 - 13606.522: 97.4286% ( 24) 00:32:42.796 13606.522 - 13668.937: 97.6518% ( 25) 00:32:42.796 13668.937 - 13731.352: 97.8304% ( 20) 00:32:42.796 13731.352 - 13793.768: 97.9464% ( 13) 00:32:42.796 13793.768 - 13856.183: 98.0982% ( 17) 00:32:42.796 13856.183 - 13918.598: 98.2143% ( 13) 00:32:42.796 13918.598 - 13981.013: 98.2679% ( 6) 00:32:42.796 13981.013 - 14043.429: 98.2857% ( 2) 00:32:42.796 14417.920 - 14480.335: 98.3036% ( 2) 00:32:42.796 14480.335 - 14542.750: 98.3393% ( 4) 00:32:42.796 14542.750 - 14605.166: 98.3661% ( 3) 00:32:42.796 14605.166 - 14667.581: 98.3929% ( 3) 00:32:42.796 14667.581 - 14729.996: 98.4286% ( 4) 00:32:42.796 14729.996 - 14792.411: 98.4554% ( 3) 00:32:42.796 14792.411 - 14854.827: 98.4643% ( 1) 00:32:42.796 14854.827 - 14917.242: 98.5179% ( 6) 
00:32:42.796 14917.242 - 14979.657: 98.5357% ( 2) 00:32:42.796 14979.657 - 15042.072: 98.5804% ( 5) 00:32:42.796 15042.072 - 15104.488: 98.5893% ( 1) 00:32:42.796 15104.488 - 15166.903: 98.6161% ( 3) 00:32:42.796 15166.903 - 15229.318: 98.6429% ( 3) 00:32:42.796 15291.733 - 15354.149: 98.6518% ( 1) 00:32:42.796 15416.564 - 15478.979: 98.6607% ( 1) 00:32:42.796 15478.979 - 15541.394: 98.6696% ( 1) 00:32:42.796 15603.810 - 15666.225: 98.6786% ( 1) 00:32:42.796 15728.640 - 15791.055: 98.7054% ( 3) 00:32:42.796 15791.055 - 15853.470: 98.7143% ( 1) 00:32:42.796 15853.470 - 15915.886: 98.7411% ( 3) 00:32:42.796 15978.301 - 16103.131: 98.7768% ( 4) 00:32:42.796 16103.131 - 16227.962: 98.8125% ( 4) 00:32:42.796 16227.962 - 16352.792: 98.8393% ( 3) 00:32:42.796 16352.792 - 16477.623: 98.8571% ( 2) 00:32:42.796 30708.297 - 30833.128: 98.9018% ( 5) 00:32:42.796 30833.128 - 30957.958: 98.9196% ( 2) 00:32:42.796 30957.958 - 31082.789: 98.9732% ( 6) 00:32:42.796 31082.789 - 31207.619: 99.0268% ( 6) 00:32:42.796 31207.619 - 31332.450: 99.0446% ( 2) 00:32:42.796 31332.450 - 31457.280: 99.0893% ( 5) 00:32:42.796 31457.280 - 31582.110: 99.1071% ( 2) 00:32:42.796 31582.110 - 31706.941: 99.1339% ( 3) 00:32:42.796 31706.941 - 31831.771: 99.1696% ( 4) 00:32:42.796 31831.771 - 31956.602: 99.1964% ( 3) 00:32:42.796 31956.602 - 32206.263: 99.2679% ( 8) 00:32:42.796 32206.263 - 32455.924: 99.3304% ( 7) 00:32:42.796 32455.924 - 32705.585: 99.3929% ( 7) 00:32:42.796 32705.585 - 32955.246: 99.4286% ( 4) 00:32:42.796 37698.804 - 37948.465: 99.4554% ( 3) 00:32:42.796 37948.465 - 38198.126: 99.5268% ( 8) 00:32:42.796 38198.126 - 38447.787: 99.5982% ( 8) 00:32:42.796 38447.787 - 38697.448: 99.6786% ( 9) 00:32:42.796 38697.448 - 38947.109: 99.7500% ( 8) 00:32:42.796 38947.109 - 39196.770: 99.8036% ( 6) 00:32:42.796 39196.770 - 39446.430: 99.8750% ( 8) 00:32:42.796 39446.430 - 39696.091: 99.9464% ( 8) 00:32:42.796 39696.091 - 39945.752: 100.0000% ( 6) 00:32:42.796 00:32:42.796 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:32:42.796 ============================================================================== 00:32:42.796 Range in us Cumulative IO count 00:32:42.796 8925.379 - 8987.794: 0.0089% ( 1) 00:32:42.796 8987.794 - 9050.210: 0.0179% ( 1) 00:32:42.796 9050.210 - 9112.625: 0.0268% ( 1) 00:32:42.796 9112.625 - 9175.040: 0.0357% ( 1) 00:32:42.796 9237.455 - 9299.870: 0.0625% ( 3) 00:32:42.796 9299.870 - 9362.286: 0.1518% ( 10) 00:32:42.796 9362.286 - 9424.701: 0.3304% ( 20) 00:32:42.796 9424.701 - 9487.116: 0.7589% ( 48) 00:32:42.796 9487.116 - 9549.531: 1.4375% ( 76) 00:32:42.796 9549.531 - 9611.947: 2.3214% ( 99) 00:32:42.796 9611.947 - 9674.362: 3.1339% ( 91) 00:32:42.796 9674.362 - 9736.777: 4.5268% ( 156) 00:32:42.796 9736.777 - 9799.192: 5.6339% ( 124) 00:32:42.796 9799.192 - 9861.608: 6.7679% ( 127) 00:32:42.796 9861.608 - 9924.023: 7.8839% ( 125) 00:32:42.796 9924.023 - 9986.438: 9.0446% ( 130) 00:32:42.796 9986.438 - 10048.853: 10.5089% ( 164) 00:32:42.796 10048.853 - 10111.269: 11.4107% ( 101) 00:32:42.796 10111.269 - 10173.684: 12.3125% ( 101) 00:32:42.796 10173.684 - 10236.099: 13.3214% ( 113) 00:32:42.796 10236.099 - 10298.514: 14.5625% ( 139) 00:32:42.796 10298.514 - 10360.930: 16.0446% ( 166) 00:32:42.796 10360.930 - 10423.345: 18.1339% ( 234) 00:32:42.796 10423.345 - 10485.760: 20.6964% ( 287) 00:32:42.796 10485.760 - 10548.175: 24.1339% ( 385) 00:32:42.796 10548.175 - 10610.590: 28.1607% ( 451) 00:32:42.796 10610.590 - 10673.006: 32.3125% ( 465) 00:32:42.796 10673.006 - 10735.421: 
36.4286% ( 461) 00:32:42.796 10735.421 - 10797.836: 40.5179% ( 458) 00:32:42.796 10797.836 - 10860.251: 44.2054% ( 413) 00:32:42.796 10860.251 - 10922.667: 47.0446% ( 318) 00:32:42.796 10922.667 - 10985.082: 49.6964% ( 297) 00:32:42.796 10985.082 - 11047.497: 52.4196% ( 305) 00:32:42.796 11047.497 - 11109.912: 54.8214% ( 269) 00:32:42.796 11109.912 - 11172.328: 57.0893% ( 254) 00:32:42.796 11172.328 - 11234.743: 58.9375% ( 207) 00:32:42.796 11234.743 - 11297.158: 60.8393% ( 213) 00:32:42.796 11297.158 - 11359.573: 62.4821% ( 184) 00:32:42.796 11359.573 - 11421.989: 64.0982% ( 181) 00:32:42.796 11421.989 - 11484.404: 65.5804% ( 166) 00:32:42.796 11484.404 - 11546.819: 66.9554% ( 154) 00:32:42.796 11546.819 - 11609.234: 68.4375% ( 166) 00:32:42.796 11609.234 - 11671.650: 69.6518% ( 136) 00:32:42.796 11671.650 - 11734.065: 70.7054% ( 118) 00:32:42.796 11734.065 - 11796.480: 71.9286% ( 137) 00:32:42.796 11796.480 - 11858.895: 73.2143% ( 144) 00:32:42.796 11858.895 - 11921.310: 74.4643% ( 140) 00:32:42.796 11921.310 - 11983.726: 75.8839% ( 159) 00:32:42.797 11983.726 - 12046.141: 76.9732% ( 122) 00:32:42.797 12046.141 - 12108.556: 78.0268% ( 118) 00:32:42.797 12108.556 - 12170.971: 79.0625% ( 116) 00:32:42.797 12170.971 - 12233.387: 80.0893% ( 115) 00:32:42.797 12233.387 - 12295.802: 80.9821% ( 100) 00:32:42.797 12295.802 - 12358.217: 82.3750% ( 156) 00:32:42.797 12358.217 - 12420.632: 83.8839% ( 169) 00:32:42.797 12420.632 - 12483.048: 85.2946% ( 158) 00:32:42.797 12483.048 - 12545.463: 86.7857% ( 167) 00:32:42.797 12545.463 - 12607.878: 88.2054% ( 159) 00:32:42.797 12607.878 - 12670.293: 89.5625% ( 152) 00:32:42.797 12670.293 - 12732.709: 90.5089% ( 106) 00:32:42.797 12732.709 - 12795.124: 91.4107% ( 101) 00:32:42.797 12795.124 - 12857.539: 92.2500% ( 94) 00:32:42.797 12857.539 - 12919.954: 93.1875% ( 105) 00:32:42.797 12919.954 - 12982.370: 94.0357% ( 95) 00:32:42.797 12982.370 - 13044.785: 94.7500% ( 80) 00:32:42.797 13044.785 - 13107.200: 95.4286% ( 76) 00:32:42.797 13107.200 - 13169.615: 96.1250% ( 78) 00:32:42.797 13169.615 - 13232.030: 96.5893% ( 52) 00:32:42.797 13232.030 - 13294.446: 96.9464% ( 40) 00:32:42.797 13294.446 - 13356.861: 97.2411% ( 33) 00:32:42.797 13356.861 - 13419.276: 97.5357% ( 33) 00:32:42.797 13419.276 - 13481.691: 97.7679% ( 26) 00:32:42.797 13481.691 - 13544.107: 97.8929% ( 14) 00:32:42.797 13544.107 - 13606.522: 98.0446% ( 17) 00:32:42.797 13606.522 - 13668.937: 98.1607% ( 13) 00:32:42.797 13668.937 - 13731.352: 98.2143% ( 6) 00:32:42.797 13731.352 - 13793.768: 98.2679% ( 6) 00:32:42.797 13793.768 - 13856.183: 98.2857% ( 2) 00:32:42.797 15354.149 - 15416.564: 98.2946% ( 1) 00:32:42.797 15416.564 - 15478.979: 98.3125% ( 2) 00:32:42.797 15478.979 - 15541.394: 98.3304% ( 2) 00:32:42.797 15541.394 - 15603.810: 98.3571% ( 3) 00:32:42.797 15603.810 - 15666.225: 98.3750% ( 2) 00:32:42.797 15666.225 - 15728.640: 98.3929% ( 2) 00:32:42.797 15728.640 - 15791.055: 98.4107% ( 2) 00:32:42.797 15791.055 - 15853.470: 98.4286% ( 2) 00:32:42.797 15853.470 - 15915.886: 98.4464% ( 2) 00:32:42.797 15915.886 - 15978.301: 98.4643% ( 2) 00:32:42.797 15978.301 - 16103.131: 98.5179% ( 6) 00:32:42.797 16103.131 - 16227.962: 98.5536% ( 4) 00:32:42.797 16227.962 - 16352.792: 98.5804% ( 3) 00:32:42.797 16352.792 - 16477.623: 98.6964% ( 13) 00:32:42.797 16477.623 - 16602.453: 98.7679% ( 8) 00:32:42.797 16602.453 - 16727.284: 98.8125% ( 5) 00:32:42.797 16727.284 - 16852.114: 98.8571% ( 5) 00:32:42.797 28586.179 - 28711.010: 98.8661% ( 1) 00:32:42.797 28711.010 - 28835.840: 98.9107% ( 5) 
00:32:42.797 28835.840 - 28960.670: 98.9464% ( 4) 00:32:42.797 28960.670 - 29085.501: 98.9821% ( 4) 00:32:42.797 29085.501 - 29210.331: 99.0268% ( 5) 00:32:42.797 29210.331 - 29335.162: 99.0625% ( 4) 00:32:42.797 29335.162 - 29459.992: 99.1071% ( 5) 00:32:42.797 29459.992 - 29584.823: 99.1429% ( 4) 00:32:42.797 29584.823 - 29709.653: 99.1875% ( 5) 00:32:42.797 29709.653 - 29834.484: 99.2232% ( 4) 00:32:42.797 29834.484 - 29959.314: 99.2589% ( 4) 00:32:42.797 29959.314 - 30084.145: 99.2946% ( 4) 00:32:42.797 30084.145 - 30208.975: 99.3304% ( 4) 00:32:42.797 30208.975 - 30333.806: 99.3661% ( 4) 00:32:42.797 30333.806 - 30458.636: 99.4018% ( 4) 00:32:42.797 30458.636 - 30583.467: 99.4286% ( 3) 00:32:42.797 35701.516 - 35951.177: 99.4732% ( 5) 00:32:42.797 35951.177 - 36200.838: 99.5536% ( 9) 00:32:42.797 36200.838 - 36450.499: 99.6250% ( 8) 00:32:42.797 36450.499 - 36700.160: 99.6964% ( 8) 00:32:42.797 36700.160 - 36949.821: 99.7768% ( 9) 00:32:42.797 36949.821 - 37199.482: 99.8571% ( 9) 00:32:42.797 37199.482 - 37449.143: 99.9196% ( 7) 00:32:42.797 37449.143 - 37698.804: 99.9911% ( 8) 00:32:42.797 37698.804 - 37948.465: 100.0000% ( 1) 00:32:42.797 00:32:42.797 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:32:42.797 ============================================================================== 00:32:42.797 Range in us Cumulative IO count 00:32:42.797 8925.379 - 8987.794: 0.0089% ( 1) 00:32:42.797 9050.210 - 9112.625: 0.0179% ( 1) 00:32:42.797 9112.625 - 9175.040: 0.0357% ( 2) 00:32:42.797 9237.455 - 9299.870: 0.0982% ( 7) 00:32:42.797 9299.870 - 9362.286: 0.2054% ( 12) 00:32:42.797 9362.286 - 9424.701: 0.3304% ( 14) 00:32:42.797 9424.701 - 9487.116: 0.6429% ( 35) 00:32:42.797 9487.116 - 9549.531: 1.0893% ( 50) 00:32:42.797 9549.531 - 9611.947: 1.9554% ( 97) 00:32:42.797 9611.947 - 9674.362: 3.0714% ( 125) 00:32:42.797 9674.362 - 9736.777: 4.3214% ( 140) 00:32:42.797 9736.777 - 9799.192: 5.4554% ( 127) 00:32:42.797 9799.192 - 9861.608: 7.4107% ( 219) 00:32:42.797 9861.608 - 9924.023: 8.3929% ( 110) 00:32:42.797 9924.023 - 9986.438: 9.8750% ( 166) 00:32:42.797 9986.438 - 10048.853: 10.5446% ( 75) 00:32:42.797 10048.853 - 10111.269: 11.4018% ( 96) 00:32:42.797 10111.269 - 10173.684: 12.6429% ( 139) 00:32:42.797 10173.684 - 10236.099: 13.6250% ( 110) 00:32:42.797 10236.099 - 10298.514: 14.6518% ( 115) 00:32:42.797 10298.514 - 10360.930: 16.2232% ( 176) 00:32:42.797 10360.930 - 10423.345: 18.2143% ( 223) 00:32:42.797 10423.345 - 10485.760: 20.6875% ( 277) 00:32:42.797 10485.760 - 10548.175: 23.9732% ( 368) 00:32:42.797 10548.175 - 10610.590: 27.6250% ( 409) 00:32:42.797 10610.590 - 10673.006: 31.4196% ( 425) 00:32:42.797 10673.006 - 10735.421: 36.0179% ( 515) 00:32:42.797 10735.421 - 10797.836: 39.9375% ( 439) 00:32:42.797 10797.836 - 10860.251: 43.6250% ( 413) 00:32:42.797 10860.251 - 10922.667: 47.1071% ( 390) 00:32:42.797 10922.667 - 10985.082: 50.4554% ( 375) 00:32:42.797 10985.082 - 11047.497: 53.0893% ( 295) 00:32:42.797 11047.497 - 11109.912: 55.4196% ( 261) 00:32:42.797 11109.912 - 11172.328: 57.7679% ( 263) 00:32:42.797 11172.328 - 11234.743: 60.0268% ( 253) 00:32:42.797 11234.743 - 11297.158: 61.9554% ( 216) 00:32:42.797 11297.158 - 11359.573: 63.4554% ( 168) 00:32:42.797 11359.573 - 11421.989: 65.1786% ( 193) 00:32:42.797 11421.989 - 11484.404: 66.4107% ( 138) 00:32:42.797 11484.404 - 11546.819: 67.4911% ( 121) 00:32:42.797 11546.819 - 11609.234: 68.6607% ( 131) 00:32:42.797 11609.234 - 11671.650: 70.1518% ( 167) 00:32:42.797 11671.650 - 11734.065: 71.4554% ( 146) 
00:32:42.797 11734.065 - 11796.480: 72.7589% ( 146) 00:32:42.797 11796.480 - 11858.895: 73.9018% ( 128) 00:32:42.797 11858.895 - 11921.310: 75.0625% ( 130) 00:32:42.797 11921.310 - 11983.726: 76.2589% ( 134) 00:32:42.797 11983.726 - 12046.141: 77.3214% ( 119) 00:32:42.797 12046.141 - 12108.556: 78.1786% ( 96) 00:32:42.797 12108.556 - 12170.971: 79.1607% ( 110) 00:32:42.797 12170.971 - 12233.387: 80.3214% ( 130) 00:32:42.797 12233.387 - 12295.802: 81.4286% ( 124) 00:32:42.797 12295.802 - 12358.217: 82.3214% ( 100) 00:32:42.797 12358.217 - 12420.632: 83.4464% ( 126) 00:32:42.797 12420.632 - 12483.048: 84.6875% ( 139) 00:32:42.797 12483.048 - 12545.463: 86.0625% ( 154) 00:32:42.797 12545.463 - 12607.878: 87.5446% ( 166) 00:32:42.797 12607.878 - 12670.293: 88.8393% ( 145) 00:32:42.797 12670.293 - 12732.709: 90.0089% ( 131) 00:32:42.797 12732.709 - 12795.124: 90.9464% ( 105) 00:32:42.797 12795.124 - 12857.539: 91.8661% ( 103) 00:32:42.797 12857.539 - 12919.954: 92.7768% ( 102) 00:32:42.797 12919.954 - 12982.370: 93.5625% ( 88) 00:32:42.797 12982.370 - 13044.785: 94.2768% ( 80) 00:32:42.797 13044.785 - 13107.200: 94.9732% ( 78) 00:32:42.797 13107.200 - 13169.615: 95.5625% ( 66) 00:32:42.797 13169.615 - 13232.030: 96.1518% ( 66) 00:32:42.797 13232.030 - 13294.446: 96.6518% ( 56) 00:32:42.797 13294.446 - 13356.861: 97.0268% ( 42) 00:32:42.797 13356.861 - 13419.276: 97.2768% ( 28) 00:32:42.797 13419.276 - 13481.691: 97.5982% ( 36) 00:32:42.797 13481.691 - 13544.107: 97.8482% ( 28) 00:32:42.797 13544.107 - 13606.522: 97.9821% ( 15) 00:32:42.797 13606.522 - 13668.937: 98.0804% ( 11) 00:32:42.797 13668.937 - 13731.352: 98.1607% ( 9) 00:32:42.797 13731.352 - 13793.768: 98.2054% ( 5) 00:32:42.797 13793.768 - 13856.183: 98.2411% ( 4) 00:32:42.797 13856.183 - 13918.598: 98.2589% ( 2) 00:32:42.797 13918.598 - 13981.013: 98.2857% ( 3) 00:32:42.797 15853.470 - 15915.886: 98.2946% ( 1) 00:32:42.797 15915.886 - 15978.301: 98.3125% ( 2) 00:32:42.797 15978.301 - 16103.131: 98.3571% ( 5) 00:32:42.797 16103.131 - 16227.962: 98.4107% ( 6) 00:32:42.797 16227.962 - 16352.792: 98.4464% ( 4) 00:32:42.797 16352.792 - 16477.623: 98.5000% ( 6) 00:32:42.797 16477.623 - 16602.453: 98.5268% ( 3) 00:32:42.797 16602.453 - 16727.284: 98.5714% ( 5) 00:32:42.797 16727.284 - 16852.114: 98.6518% ( 9) 00:32:42.797 16852.114 - 16976.945: 98.7321% ( 9) 00:32:42.797 16976.945 - 17101.775: 98.8393% ( 12) 00:32:42.797 17101.775 - 17226.606: 98.8571% ( 2) 00:32:42.797 27462.705 - 27587.535: 98.8661% ( 1) 00:32:42.797 27587.535 - 27712.366: 98.9018% ( 4) 00:32:42.797 27712.366 - 27837.196: 98.9375% ( 4) 00:32:42.797 27837.196 - 27962.027: 98.9821% ( 5) 00:32:42.797 27962.027 - 28086.857: 99.0179% ( 4) 00:32:42.797 28086.857 - 28211.688: 99.0536% ( 4) 00:32:42.797 28211.688 - 28336.518: 99.0982% ( 5) 00:32:42.797 28336.518 - 28461.349: 99.1339% ( 4) 00:32:42.797 28461.349 - 28586.179: 99.1786% ( 5) 00:32:42.797 28586.179 - 28711.010: 99.2143% ( 4) 00:32:42.797 28711.010 - 28835.840: 99.2500% ( 4) 00:32:42.797 28835.840 - 28960.670: 99.2857% ( 4) 00:32:42.797 28960.670 - 29085.501: 99.3304% ( 5) 00:32:42.797 29085.501 - 29210.331: 99.3661% ( 4) 00:32:42.797 29210.331 - 29335.162: 99.4107% ( 5) 00:32:42.797 29335.162 - 29459.992: 99.4286% ( 2) 00:32:42.797 34453.211 - 34702.872: 99.4643% ( 4) 00:32:42.797 34702.872 - 34952.533: 99.5446% ( 9) 00:32:42.798 34952.533 - 35202.194: 99.6071% ( 7) 00:32:42.798 35202.194 - 35451.855: 99.6875% ( 9) 00:32:42.798 35451.855 - 35701.516: 99.7589% ( 8) 00:32:42.798 35701.516 - 35951.177: 99.8393% ( 9) 
00:32:42.798 35951.177 - 36200.838: 99.9107% ( 8) 00:32:42.798 36200.838 - 36450.499: 99.9911% ( 9) 00:32:42.798 36450.499 - 36700.160: 100.0000% ( 1) 00:32:42.798 00:32:42.798 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:32:42.798 ============================================================================== 00:32:42.798 Range in us Cumulative IO count 00:32:42.798 8925.379 - 8987.794: 0.0089% ( 1) 00:32:42.798 9112.625 - 9175.040: 0.0357% ( 3) 00:32:42.798 9175.040 - 9237.455: 0.0804% ( 5) 00:32:42.798 9237.455 - 9299.870: 0.1518% ( 8) 00:32:42.798 9299.870 - 9362.286: 0.2679% ( 13) 00:32:42.798 9362.286 - 9424.701: 0.5179% ( 28) 00:32:42.798 9424.701 - 9487.116: 0.9375% ( 47) 00:32:42.798 9487.116 - 9549.531: 1.4554% ( 58) 00:32:42.798 9549.531 - 9611.947: 2.1875% ( 82) 00:32:42.798 9611.947 - 9674.362: 3.4464% ( 141) 00:32:42.798 9674.362 - 9736.777: 4.3482% ( 101) 00:32:42.798 9736.777 - 9799.192: 5.3125% ( 108) 00:32:42.798 9799.192 - 9861.608: 6.8304% ( 170) 00:32:42.798 9861.608 - 9924.023: 8.2946% ( 164) 00:32:42.798 9924.023 - 9986.438: 9.3214% ( 115) 00:32:42.798 9986.438 - 10048.853: 10.1250% ( 90) 00:32:42.798 10048.853 - 10111.269: 11.0714% ( 106) 00:32:42.798 10111.269 - 10173.684: 12.0179% ( 106) 00:32:42.798 10173.684 - 10236.099: 13.3929% ( 154) 00:32:42.798 10236.099 - 10298.514: 15.0000% ( 180) 00:32:42.798 10298.514 - 10360.930: 16.7054% ( 191) 00:32:42.798 10360.930 - 10423.345: 19.0000% ( 257) 00:32:42.798 10423.345 - 10485.760: 21.5536% ( 286) 00:32:42.798 10485.760 - 10548.175: 24.8929% ( 374) 00:32:42.798 10548.175 - 10610.590: 28.6250% ( 418) 00:32:42.798 10610.590 - 10673.006: 32.4821% ( 432) 00:32:42.798 10673.006 - 10735.421: 36.7054% ( 473) 00:32:42.798 10735.421 - 10797.836: 40.0536% ( 375) 00:32:42.798 10797.836 - 10860.251: 43.5268% ( 389) 00:32:42.798 10860.251 - 10922.667: 46.8482% ( 372) 00:32:42.798 10922.667 - 10985.082: 49.5804% ( 306) 00:32:42.798 10985.082 - 11047.497: 52.3125% ( 306) 00:32:42.798 11047.497 - 11109.912: 54.7768% ( 276) 00:32:42.798 11109.912 - 11172.328: 57.3304% ( 286) 00:32:42.798 11172.328 - 11234.743: 59.1786% ( 207) 00:32:42.798 11234.743 - 11297.158: 61.1161% ( 217) 00:32:42.798 11297.158 - 11359.573: 62.7411% ( 182) 00:32:42.798 11359.573 - 11421.989: 64.2946% ( 174) 00:32:42.798 11421.989 - 11484.404: 65.9107% ( 181) 00:32:42.798 11484.404 - 11546.819: 67.5000% ( 178) 00:32:42.798 11546.819 - 11609.234: 69.0000% ( 168) 00:32:42.798 11609.234 - 11671.650: 70.5268% ( 171) 00:32:42.798 11671.650 - 11734.065: 71.8839% ( 152) 00:32:42.798 11734.065 - 11796.480: 73.0179% ( 127) 00:32:42.798 11796.480 - 11858.895: 74.2143% ( 134) 00:32:42.798 11858.895 - 11921.310: 75.3393% ( 126) 00:32:42.798 11921.310 - 11983.726: 76.2143% ( 98) 00:32:42.798 11983.726 - 12046.141: 77.1250% ( 102) 00:32:42.798 12046.141 - 12108.556: 77.8571% ( 82) 00:32:42.798 12108.556 - 12170.971: 78.8661% ( 113) 00:32:42.798 12170.971 - 12233.387: 79.9375% ( 120) 00:32:42.798 12233.387 - 12295.802: 81.3304% ( 156) 00:32:42.798 12295.802 - 12358.217: 82.5804% ( 140) 00:32:42.798 12358.217 - 12420.632: 83.9554% ( 154) 00:32:42.798 12420.632 - 12483.048: 85.2857% ( 149) 00:32:42.798 12483.048 - 12545.463: 86.3393% ( 118) 00:32:42.798 12545.463 - 12607.878: 87.3929% ( 118) 00:32:42.798 12607.878 - 12670.293: 88.2321% ( 94) 00:32:42.798 12670.293 - 12732.709: 89.1964% ( 108) 00:32:42.798 12732.709 - 12795.124: 90.2589% ( 119) 00:32:42.798 12795.124 - 12857.539: 91.2411% ( 110) 00:32:42.798 12857.539 - 12919.954: 92.2054% ( 108) 00:32:42.798 
12919.954 - 12982.370: 93.2321% ( 115) 00:32:42.798 12982.370 - 13044.785: 94.2768% ( 117) 00:32:42.798 13044.785 - 13107.200: 94.9464% ( 75) 00:32:42.798 13107.200 - 13169.615: 95.5536% ( 68) 00:32:42.798 13169.615 - 13232.030: 96.0268% ( 53) 00:32:42.798 13232.030 - 13294.446: 96.4375% ( 46) 00:32:42.798 13294.446 - 13356.861: 96.7857% ( 39) 00:32:42.798 13356.861 - 13419.276: 97.0446% ( 29) 00:32:42.798 13419.276 - 13481.691: 97.2768% ( 26) 00:32:42.798 13481.691 - 13544.107: 97.4554% ( 20) 00:32:42.798 13544.107 - 13606.522: 97.6429% ( 21) 00:32:42.798 13606.522 - 13668.937: 97.8482% ( 23) 00:32:42.798 13668.937 - 13731.352: 97.9554% ( 12) 00:32:42.798 13731.352 - 13793.768: 98.0893% ( 15) 00:32:42.798 13793.768 - 13856.183: 98.1786% ( 10) 00:32:42.798 13856.183 - 13918.598: 98.2500% ( 8) 00:32:42.798 13918.598 - 13981.013: 98.2768% ( 3) 00:32:42.798 13981.013 - 14043.429: 98.2857% ( 1) 00:32:42.798 16727.284 - 16852.114: 98.3214% ( 4) 00:32:42.798 16852.114 - 16976.945: 98.3661% ( 5) 00:32:42.798 16976.945 - 17101.775: 98.4196% ( 6) 00:32:42.798 17101.775 - 17226.606: 98.4643% ( 5) 00:32:42.798 17226.606 - 17351.436: 98.5179% ( 6) 00:32:42.798 17351.436 - 17476.267: 98.5536% ( 4) 00:32:42.798 17476.267 - 17601.097: 98.5893% ( 4) 00:32:42.798 17601.097 - 17725.928: 98.6339% ( 5) 00:32:42.798 17725.928 - 17850.758: 98.7054% ( 8) 00:32:42.798 17850.758 - 17975.589: 98.7768% ( 8) 00:32:42.798 17975.589 - 18100.419: 98.8125% ( 4) 00:32:42.798 18100.419 - 18225.250: 98.8393% ( 3) 00:32:42.798 18225.250 - 18350.080: 98.8571% ( 2) 00:32:42.798 25715.078 - 25839.909: 98.8929% ( 4) 00:32:42.798 25839.909 - 25964.739: 98.9286% ( 4) 00:32:42.798 25964.739 - 26089.570: 98.9732% ( 5) 00:32:42.798 26089.570 - 26214.400: 99.0089% ( 4) 00:32:42.798 26214.400 - 26339.230: 99.0446% ( 4) 00:32:42.798 26339.230 - 26464.061: 99.0714% ( 3) 00:32:42.798 26464.061 - 26588.891: 99.1161% ( 5) 00:32:42.798 26588.891 - 26713.722: 99.1518% ( 4) 00:32:42.798 26713.722 - 26838.552: 99.1964% ( 5) 00:32:42.798 26838.552 - 26963.383: 99.2321% ( 4) 00:32:42.798 26963.383 - 27088.213: 99.2679% ( 4) 00:32:42.798 27088.213 - 27213.044: 99.3036% ( 4) 00:32:42.798 27213.044 - 27337.874: 99.3393% ( 4) 00:32:42.798 27337.874 - 27462.705: 99.3839% ( 5) 00:32:42.798 27462.705 - 27587.535: 99.4107% ( 3) 00:32:42.798 27587.535 - 27712.366: 99.4286% ( 2) 00:32:42.798 32455.924 - 32705.585: 99.4554% ( 3) 00:32:42.798 32705.585 - 32955.246: 99.5268% ( 8) 00:32:42.798 32955.246 - 33204.907: 99.6071% ( 9) 00:32:42.798 33204.907 - 33454.568: 99.6875% ( 9) 00:32:42.798 33454.568 - 33704.229: 99.7589% ( 8) 00:32:42.798 33704.229 - 33953.890: 99.8393% ( 9) 00:32:42.798 33953.890 - 34203.550: 99.9107% ( 8) 00:32:42.799 34203.550 - 34453.211: 99.9821% ( 8) 00:32:42.799 34453.211 - 34702.872: 100.0000% ( 2) 00:32:42.799 00:32:42.799 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:32:42.799 ============================================================================== 00:32:42.799 Range in us Cumulative IO count 00:32:42.799 9112.625 - 9175.040: 0.0089% ( 1) 00:32:42.799 9175.040 - 9237.455: 0.0179% ( 1) 00:32:42.799 9237.455 - 9299.870: 0.0268% ( 1) 00:32:42.799 9299.870 - 9362.286: 0.1429% ( 13) 00:32:42.799 9362.286 - 9424.701: 0.3661% ( 25) 00:32:42.799 9424.701 - 9487.116: 0.6786% ( 35) 00:32:42.799 9487.116 - 9549.531: 1.1339% ( 51) 00:32:42.799 9549.531 - 9611.947: 1.8839% ( 84) 00:32:42.799 9611.947 - 9674.362: 3.1429% ( 141) 00:32:42.799 9674.362 - 9736.777: 4.1161% ( 109) 00:32:42.799 9736.777 - 9799.192: 5.8482% ( 
194) 00:32:42.799 9799.192 - 9861.608: 6.6875% ( 94) 00:32:42.799 9861.608 - 9924.023: 8.1429% ( 163) 00:32:42.799 9924.023 - 9986.438: 9.0357% ( 100) 00:32:42.799 9986.438 - 10048.853: 9.6518% ( 69) 00:32:42.799 10048.853 - 10111.269: 10.7500% ( 123) 00:32:42.799 10111.269 - 10173.684: 11.8036% ( 118) 00:32:42.799 10173.684 - 10236.099: 13.0982% ( 145) 00:32:42.799 10236.099 - 10298.514: 14.8304% ( 194) 00:32:42.799 10298.514 - 10360.930: 16.8393% ( 225) 00:32:42.799 10360.930 - 10423.345: 19.3125% ( 277) 00:32:42.799 10423.345 - 10485.760: 22.3214% ( 337) 00:32:42.799 10485.760 - 10548.175: 25.8571% ( 396) 00:32:42.799 10548.175 - 10610.590: 29.1071% ( 364) 00:32:42.799 10610.590 - 10673.006: 32.8482% ( 419) 00:32:42.799 10673.006 - 10735.421: 36.8036% ( 443) 00:32:42.799 10735.421 - 10797.836: 40.5179% ( 416) 00:32:42.799 10797.836 - 10860.251: 44.1071% ( 402) 00:32:42.799 10860.251 - 10922.667: 47.3304% ( 361) 00:32:42.799 10922.667 - 10985.082: 50.3304% ( 336) 00:32:42.799 10985.082 - 11047.497: 53.1071% ( 311) 00:32:42.799 11047.497 - 11109.912: 55.3571% ( 252) 00:32:42.799 11109.912 - 11172.328: 57.3929% ( 228) 00:32:42.799 11172.328 - 11234.743: 59.1964% ( 202) 00:32:42.799 11234.743 - 11297.158: 60.9196% ( 193) 00:32:42.799 11297.158 - 11359.573: 62.6875% ( 198) 00:32:42.799 11359.573 - 11421.989: 64.2500% ( 175) 00:32:42.799 11421.989 - 11484.404: 65.7232% ( 165) 00:32:42.799 11484.404 - 11546.819: 66.9375% ( 136) 00:32:42.799 11546.819 - 11609.234: 68.2411% ( 146) 00:32:42.799 11609.234 - 11671.650: 69.5179% ( 143) 00:32:42.799 11671.650 - 11734.065: 70.6875% ( 131) 00:32:42.799 11734.065 - 11796.480: 72.0179% ( 149) 00:32:42.799 11796.480 - 11858.895: 73.4107% ( 156) 00:32:42.799 11858.895 - 11921.310: 74.6429% ( 138) 00:32:42.799 11921.310 - 11983.726: 75.8036% ( 130) 00:32:42.799 11983.726 - 12046.141: 76.9286% ( 126) 00:32:42.799 12046.141 - 12108.556: 78.2054% ( 143) 00:32:42.799 12108.556 - 12170.971: 79.3482% ( 128) 00:32:42.799 12170.971 - 12233.387: 80.4107% ( 119) 00:32:42.799 12233.387 - 12295.802: 81.7946% ( 155) 00:32:42.799 12295.802 - 12358.217: 83.1518% ( 152) 00:32:42.799 12358.217 - 12420.632: 84.1875% ( 116) 00:32:42.799 12420.632 - 12483.048: 85.3125% ( 126) 00:32:42.799 12483.048 - 12545.463: 86.4286% ( 125) 00:32:42.799 12545.463 - 12607.878: 87.4375% ( 113) 00:32:42.799 12607.878 - 12670.293: 88.2768% ( 94) 00:32:42.799 12670.293 - 12732.709: 89.2143% ( 105) 00:32:42.799 12732.709 - 12795.124: 90.0982% ( 99) 00:32:42.799 12795.124 - 12857.539: 90.9732% ( 98) 00:32:42.799 12857.539 - 12919.954: 91.9643% ( 111) 00:32:42.799 12919.954 - 12982.370: 92.8839% ( 103) 00:32:42.799 12982.370 - 13044.785: 93.7946% ( 102) 00:32:42.799 13044.785 - 13107.200: 94.4821% ( 77) 00:32:42.799 13107.200 - 13169.615: 95.1518% ( 75) 00:32:42.799 13169.615 - 13232.030: 95.6429% ( 55) 00:32:42.799 13232.030 - 13294.446: 96.0268% ( 43) 00:32:42.799 13294.446 - 13356.861: 96.3750% ( 39) 00:32:42.799 13356.861 - 13419.276: 96.6875% ( 35) 00:32:42.799 13419.276 - 13481.691: 96.9911% ( 34) 00:32:42.799 13481.691 - 13544.107: 97.3036% ( 35) 00:32:42.799 13544.107 - 13606.522: 97.4911% ( 21) 00:32:42.799 13606.522 - 13668.937: 97.6339% ( 16) 00:32:42.799 13668.937 - 13731.352: 97.7857% ( 17) 00:32:42.799 13731.352 - 13793.768: 97.8750% ( 10) 00:32:42.799 13793.768 - 13856.183: 97.9464% ( 8) 00:32:42.799 13856.183 - 13918.598: 98.0268% ( 9) 00:32:42.799 13918.598 - 13981.013: 98.0982% ( 8) 00:32:42.799 13981.013 - 14043.429: 98.1607% ( 7) 00:32:42.799 14043.429 - 14105.844: 98.2232% ( 7) 
00:32:42.799 14105.844 - 14168.259: 98.2589% ( 4) 00:32:42.799 14168.259 - 14230.674: 98.2768% ( 2) 00:32:42.799 14230.674 - 14293.090: 98.2857% ( 1) 00:32:42.799 17476.267 - 17601.097: 98.3304% ( 5) 00:32:42.799 17601.097 - 17725.928: 98.3661% ( 4) 00:32:42.799 17725.928 - 17850.758: 98.3929% ( 3) 00:32:42.799 17850.758 - 17975.589: 98.4375% ( 5) 00:32:42.799 17975.589 - 18100.419: 98.4911% ( 6) 00:32:42.799 18100.419 - 18225.250: 98.5268% ( 4) 00:32:42.799 18225.250 - 18350.080: 98.5625% ( 4) 00:32:42.799 18350.080 - 18474.910: 98.6071% ( 5) 00:32:42.799 18474.910 - 18599.741: 98.7054% ( 11) 00:32:42.799 18599.741 - 18724.571: 98.7857% ( 9) 00:32:42.799 18724.571 - 18849.402: 98.8393% ( 6) 00:32:42.799 18849.402 - 18974.232: 98.8571% ( 2) 00:32:42.799 23842.621 - 23967.451: 98.8661% ( 1) 00:32:42.799 23967.451 - 24092.282: 98.9018% ( 4) 00:32:42.799 24092.282 - 24217.112: 98.9464% ( 5) 00:32:42.799 24217.112 - 24341.943: 98.9821% ( 4) 00:32:42.799 24341.943 - 24466.773: 99.0179% ( 4) 00:32:42.799 24466.773 - 24591.604: 99.0536% ( 4) 00:32:42.799 24591.604 - 24716.434: 99.0982% ( 5) 00:32:42.799 24716.434 - 24841.265: 99.1339% ( 4) 00:32:42.799 24841.265 - 24966.095: 99.1786% ( 5) 00:32:42.799 24966.095 - 25090.926: 99.2143% ( 4) 00:32:42.799 25090.926 - 25215.756: 99.2589% ( 5) 00:32:42.799 25215.756 - 25340.587: 99.2946% ( 4) 00:32:42.799 25340.587 - 25465.417: 99.3304% ( 4) 00:32:42.799 25465.417 - 25590.248: 99.3661% ( 4) 00:32:42.799 25590.248 - 25715.078: 99.4018% ( 4) 00:32:42.799 25715.078 - 25839.909: 99.4286% ( 3) 00:32:42.799 30708.297 - 30833.128: 99.4643% ( 4) 00:32:42.799 30833.128 - 30957.958: 99.5089% ( 5) 00:32:42.799 30957.958 - 31082.789: 99.5446% ( 4) 00:32:42.799 31082.789 - 31207.619: 99.5804% ( 4) 00:32:42.799 31207.619 - 31332.450: 99.6161% ( 4) 00:32:42.799 31332.450 - 31457.280: 99.6607% ( 5) 00:32:42.799 31457.280 - 31582.110: 99.6964% ( 4) 00:32:42.799 31582.110 - 31706.941: 99.7321% ( 4) 00:32:42.799 31706.941 - 31831.771: 99.7768% ( 5) 00:32:42.799 31831.771 - 31956.602: 99.8125% ( 4) 00:32:42.799 31956.602 - 32206.263: 99.8929% ( 9) 00:32:42.799 32206.263 - 32455.924: 99.9643% ( 8) 00:32:42.799 32455.924 - 32705.585: 100.0000% ( 4) 00:32:42.799 00:32:42.799 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:32:42.799 ============================================================================== 00:32:42.799 Range in us Cumulative IO count 00:32:42.799 9112.625 - 9175.040: 0.0089% ( 1) 00:32:42.799 9175.040 - 9237.455: 0.0266% ( 2) 00:32:42.799 9299.870 - 9362.286: 0.0533% ( 3) 00:32:42.799 9362.286 - 9424.701: 0.2752% ( 25) 00:32:42.799 9424.701 - 9487.116: 0.5593% ( 32) 00:32:42.799 9487.116 - 9549.531: 0.9322% ( 42) 00:32:42.799 9549.531 - 9611.947: 1.6868% ( 85) 00:32:42.799 9611.947 - 9674.362: 3.1428% ( 164) 00:32:42.799 9674.362 - 9736.777: 4.0749% ( 105) 00:32:42.799 9736.777 - 9799.192: 5.2202% ( 129) 00:32:42.799 9799.192 - 9861.608: 7.1200% ( 214) 00:32:42.799 9861.608 - 9924.023: 8.2386% ( 126) 00:32:42.799 9924.023 - 9986.438: 9.0732% ( 94) 00:32:42.799 9986.438 - 10048.853: 9.8722% ( 90) 00:32:42.799 10048.853 - 10111.269: 10.6445% ( 87) 00:32:42.799 10111.269 - 10173.684: 12.1449% ( 169) 00:32:42.799 10173.684 - 10236.099: 13.2102% ( 120) 00:32:42.799 10236.099 - 10298.514: 14.6662% ( 164) 00:32:42.799 10298.514 - 10360.930: 16.4506% ( 201) 00:32:42.799 10360.930 - 10423.345: 19.1673% ( 306) 00:32:42.799 10423.345 - 10485.760: 21.8750% ( 305) 00:32:42.799 10485.760 - 10548.175: 24.9734% ( 349) 00:32:42.799 10548.175 - 10610.590: 
28.7908% ( 430) 00:32:42.799 10610.590 - 10673.006: 32.4130% ( 408) 00:32:42.799 10673.006 - 10735.421: 36.8963% ( 505) 00:32:42.799 10735.421 - 10797.836: 41.0067% ( 463) 00:32:42.799 10797.836 - 10860.251: 44.5579% ( 400) 00:32:42.799 10860.251 - 10922.667: 47.6740% ( 351) 00:32:42.799 10922.667 - 10985.082: 50.4705% ( 315) 00:32:42.799 10985.082 - 11047.497: 53.0273% ( 288) 00:32:42.799 11047.497 - 11109.912: 55.0249% ( 225) 00:32:42.799 11109.912 - 11172.328: 57.3775% ( 265) 00:32:42.799 11172.328 - 11234.743: 59.5437% ( 244) 00:32:42.799 11234.743 - 11297.158: 61.3548% ( 204) 00:32:42.799 11297.158 - 11359.573: 62.8374% ( 167) 00:32:42.799 11359.573 - 11421.989: 64.3111% ( 166) 00:32:42.799 11421.989 - 11484.404: 65.8913% ( 178) 00:32:42.799 11484.404 - 11546.819: 67.0543% ( 131) 00:32:42.799 11546.819 - 11609.234: 68.2440% ( 134) 00:32:42.799 11609.234 - 11671.650: 69.2472% ( 113) 00:32:42.799 11671.650 - 11734.065: 70.3125% ( 120) 00:32:42.799 11734.065 - 11796.480: 71.4933% ( 133) 00:32:42.799 11796.480 - 11858.895: 72.7184% ( 138) 00:32:42.799 11858.895 - 11921.310: 73.9790% ( 142) 00:32:42.799 11921.310 - 11983.726: 75.1332% ( 130) 00:32:42.799 11983.726 - 12046.141: 76.2784% ( 129) 00:32:42.799 12046.141 - 12108.556: 77.7344% ( 164) 00:32:42.799 12108.556 - 12170.971: 79.0572% ( 149) 00:32:42.799 12170.971 - 12233.387: 80.1847% ( 127) 00:32:42.799 12233.387 - 12295.802: 81.4808% ( 146) 00:32:42.799 12295.802 - 12358.217: 82.7947% ( 148) 00:32:42.800 12358.217 - 12420.632: 84.2596% ( 165) 00:32:42.800 12420.632 - 12483.048: 85.3960% ( 128) 00:32:42.800 12483.048 - 12545.463: 86.7099% ( 148) 00:32:42.800 12545.463 - 12607.878: 87.8462% ( 128) 00:32:42.800 12607.878 - 12670.293: 88.9648% ( 126) 00:32:42.800 12670.293 - 12732.709: 90.0391% ( 121) 00:32:42.800 12732.709 - 12795.124: 90.9535% ( 103) 00:32:42.800 12795.124 - 12857.539: 91.8857% ( 105) 00:32:42.800 12857.539 - 12919.954: 92.6491% ( 86) 00:32:42.800 12919.954 - 12982.370: 93.4304% ( 88) 00:32:42.800 12982.370 - 13044.785: 94.1495% ( 81) 00:32:42.800 13044.785 - 13107.200: 94.7354% ( 66) 00:32:42.800 13107.200 - 13169.615: 95.2148% ( 54) 00:32:42.800 13169.615 - 13232.030: 95.6410% ( 48) 00:32:42.800 13232.030 - 13294.446: 96.0582% ( 47) 00:32:42.800 13294.446 - 13356.861: 96.3601% ( 34) 00:32:42.800 13356.861 - 13419.276: 96.6797% ( 36) 00:32:42.800 13419.276 - 13481.691: 96.9904% ( 35) 00:32:42.800 13481.691 - 13544.107: 97.1413% ( 17) 00:32:42.800 13544.107 - 13606.522: 97.2656% ( 14) 00:32:42.800 13606.522 - 13668.937: 97.3544% ( 10) 00:32:42.800 13668.937 - 13731.352: 97.4787% ( 14) 00:32:42.800 13731.352 - 13793.768: 97.5586% ( 9) 00:32:42.800 13793.768 - 13856.183: 97.6385% ( 9) 00:32:42.800 13856.183 - 13918.598: 97.7095% ( 8) 00:32:42.800 13918.598 - 13981.013: 97.7539% ( 5) 00:32:42.800 13981.013 - 14043.429: 97.8072% ( 6) 00:32:42.800 14043.429 - 14105.844: 97.8516% ( 5) 00:32:42.800 14105.844 - 14168.259: 97.8960% ( 5) 00:32:42.800 14168.259 - 14230.674: 97.9137% ( 2) 00:32:42.800 14230.674 - 14293.090: 97.9403% ( 3) 00:32:42.800 14293.090 - 14355.505: 97.9581% ( 2) 00:32:42.800 14355.505 - 14417.920: 97.9847% ( 3) 00:32:42.800 14417.920 - 14480.335: 97.9936% ( 1) 00:32:42.800 14480.335 - 14542.750: 98.0114% ( 2) 00:32:42.800 14542.750 - 14605.166: 98.0380% ( 3) 00:32:42.800 14605.166 - 14667.581: 98.0558% ( 2) 00:32:42.800 14667.581 - 14729.996: 98.1090% ( 6) 00:32:42.800 14729.996 - 14792.411: 98.1268% ( 2) 00:32:42.800 14792.411 - 14854.827: 98.2067% ( 9) 00:32:42.800 14854.827 - 14917.242: 98.2511% ( 5) 
00:32:42.800 14917.242 - 14979.657: 98.2599% ( 1)
00:32:42.800 14979.657 - 15042.072: 98.2777% ( 2)
00:32:42.800 15042.072 - 15104.488: 98.2955% ( 2)
00:32:42.800 16103.131 - 16227.962: 98.3043% ( 1)
00:32:42.800 16227.962 - 16352.792: 98.3310% ( 3)
00:32:42.800 16352.792 - 16477.623: 98.3665% ( 4)
00:32:42.800 16477.623 - 16602.453: 98.4020% ( 4)
00:32:42.800 16602.453 - 16727.284: 98.4375% ( 4)
00:32:42.800 16727.284 - 16852.114: 98.4730% ( 4)
00:32:42.800 16852.114 - 16976.945: 98.5085% ( 4)
00:32:42.800 16976.945 - 17101.775: 98.5529% ( 5)
00:32:42.800 17101.775 - 17226.606: 98.5795% ( 3)
00:32:42.800 17226.606 - 17351.436: 98.6151% ( 4)
00:32:42.800 17351.436 - 17476.267: 98.6506% ( 4)
00:32:42.800 17476.267 - 17601.097: 98.6861% ( 4)
00:32:42.800 17601.097 - 17725.928: 98.7216% ( 4)
00:32:42.800 17725.928 - 17850.758: 98.7571% ( 4)
00:32:42.800 17850.758 - 17975.589: 98.7926% ( 4)
00:32:42.800 17975.589 - 18100.419: 98.8281% ( 4)
00:32:42.800 18100.419 - 18225.250: 98.8636% ( 4)
00:32:42.800 18225.250 - 18350.080: 98.8725% ( 1)
00:32:42.800 18350.080 - 18474.910: 98.9080% ( 4)
00:32:42.800 18474.910 - 18599.741: 98.9524% ( 5)
00:32:42.800 18599.741 - 18724.571: 99.0057% ( 6)
00:32:42.800 18724.571 - 18849.402: 99.0412% ( 4)
00:32:42.800 18849.402 - 18974.232: 99.0767% ( 4)
00:32:42.800 18974.232 - 19099.063: 99.1122% ( 4)
00:32:42.800 19099.063 - 19223.893: 99.1477% ( 4)
00:32:42.800 19223.893 - 19348.724: 99.1921% ( 5)
00:32:42.800 19348.724 - 19473.554: 99.2276% ( 4)
00:32:42.800 19473.554 - 19598.385: 99.2809% ( 6)
00:32:42.800 19598.385 - 19723.215: 99.3519% ( 8)
00:32:42.800 19723.215 - 19848.046: 99.3874% ( 4)
00:32:42.800 19848.046 - 19972.876: 99.4052% ( 2)
00:32:42.800 19972.876 - 20097.707: 99.4229% ( 2)
00:32:42.800 20097.707 - 20222.537: 99.4318% ( 1)
00:32:42.800 23218.469 - 23343.299: 99.4407% ( 1)
00:32:42.800 23343.299 - 23468.130: 99.4762% ( 4)
00:32:42.800 23468.130 - 23592.960: 99.5206% ( 5)
00:32:42.800 23592.960 - 23717.790: 99.5561% ( 4)
00:32:42.800 23717.790 - 23842.621: 99.5916% ( 4)
00:32:42.800 23842.621 - 23967.451: 99.6271% ( 4)
00:32:42.800 23967.451 - 24092.282: 99.6626% ( 4)
00:32:42.800 24092.282 - 24217.112: 99.6982% ( 4)
00:32:42.800 24217.112 - 24341.943: 99.7425% ( 5)
00:32:42.800 24341.943 - 24466.773: 99.7781% ( 4)
00:32:42.800 24466.773 - 24591.604: 99.8136% ( 4)
00:32:42.800 24591.604 - 24716.434: 99.8580% ( 5)
00:32:42.800 24716.434 - 24841.265: 99.8935% ( 4)
00:32:42.800 24841.265 - 24966.095: 99.9290% ( 4)
00:32:42.800 24966.095 - 25090.926: 99.9734% ( 5)
00:32:42.800 25090.926 - 25215.756: 100.0000% ( 3)
00:32:42.800
00:32:42.800 13:53:39 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:32:42.800
00:32:42.800 real 0m2.830s
00:32:42.800 user 0m2.350s
00:32:42.800 sys 0m0.367s
00:32:42.800 13:53:39 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:42.800 13:53:39 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:32:42.800 ************************************
00:32:42.800 END TEST nvme_perf
00:32:42.800 ************************************
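The nvme_perf pass that just ended was launched at 13:53:38 as /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0. A minimal sketch of the same run outside the harness, assuming the same build tree and that the controllers are still bound to a userspace driver (SPDK's scripts/setup.sh does the binding):

    # assumes hugepages are configured and devices are bound (scripts/setup.sh)
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 \      # queue depth per namespace
        -o 12288 \    # I/O size in bytes (12 KiB)
        -w write \    # I/O pattern
        -t 1 \        # run time in seconds
        -LL \         # latency tracking; the doubled L produces the per-bucket histograms seen above
        -i 0          # shared-memory ID, as passed by nvme.sh

Each "Summary latency data" block above reports per-namespace percentiles in microseconds; each "Latency histogram" block lists cumulative I/O counts per latency bucket.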
nvme_hello_world 00:32:42.800 ************************************ 00:32:42.800 13:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:43.060 Initializing NVMe Controllers 00:32:43.060 Attached to 0000:00:10.0 00:32:43.060 Namespace ID: 1 size: 6GB 00:32:43.060 Attached to 0000:00:11.0 00:32:43.060 Namespace ID: 1 size: 5GB 00:32:43.060 Attached to 0000:00:13.0 00:32:43.060 Namespace ID: 1 size: 1GB 00:32:43.060 Attached to 0000:00:12.0 00:32:43.060 Namespace ID: 1 size: 4GB 00:32:43.060 Namespace ID: 2 size: 4GB 00:32:43.060 Namespace ID: 3 size: 4GB 00:32:43.060 Initialization complete. 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.060 INFO: using host memory buffer for IO 00:32:43.060 Hello world! 00:32:43.319 ************************************ 00:32:43.319 END TEST nvme_hello_world 00:32:43.319 ************************************ 00:32:43.319 00:32:43.319 real 0m0.412s 00:32:43.319 user 0m0.157s 00:32:43.319 sys 0m0.193s 00:32:43.319 13:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.319 13:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:43.319 13:53:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:43.319 ************************************ 00:32:43.319 START TEST nvme_sgl 00:32:43.319 ************************************ 00:32:43.319 13:53:40 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:43.578 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:32:43.578 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:32:43.578 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:32:43.578 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:32:43.578 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:32:43.578 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:32:43.578 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:32:43.578 0000:00:13.0: build_io_request_6 Invalid IO 
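nvme_hello_world covers the basic I/O path: the example attaches to each probed controller and, for every active namespace, writes a buffer holding the greeting, reads it back, and prints the returned "Hello world!". That is why six greetings appear above, one per namespace (one each on 0000:00:10.0, 0000:00:11.0 and 0000:00:13.0, three on 0000:00:12.0). A sketch of a standalone invocation, using the binary path from the run_test line:

    # assumes the same build tree and device bindings as the harness
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0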
00:32:43.319 13:53:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:43.319 13:53:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:32:43.319 ************************************
00:32:43.319 START TEST nvme_sgl
00:32:43.319 ************************************
00:32:43.319 13:53:40 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:32:43.578 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:32:43.578 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:32:43.578 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:32:43.578 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:32:43.578 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:32:43.578 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:32:43.578 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:32:43.578 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:32:43.578 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:32:43.838 NVMe Readv/Writev Request test
00:32:43.838 Attached to 0000:00:10.0
00:32:43.838 Attached to 0000:00:11.0
00:32:43.838 Attached to 0000:00:13.0
00:32:43.838 Attached to 0000:00:12.0
00:32:43.838 0000:00:10.0: build_io_request_2 test passed
00:32:43.838 0000:00:10.0: build_io_request_4 test passed
00:32:43.838 0000:00:10.0: build_io_request_5 test passed
00:32:43.838 0000:00:10.0: build_io_request_6 test passed
00:32:43.838 0000:00:10.0: build_io_request_7 test passed
00:32:43.838 0000:00:10.0: build_io_request_10 test passed
00:32:43.838 0000:00:11.0: build_io_request_2 test passed
00:32:43.838 0000:00:11.0: build_io_request_4 test passed
00:32:43.838 0000:00:11.0: build_io_request_5 test passed
00:32:43.838 0000:00:11.0: build_io_request_6 test passed
00:32:43.838 0000:00:11.0: build_io_request_7 test passed
00:32:43.838 0000:00:11.0: build_io_request_10 test passed
00:32:43.838 Cleaning up...
00:32:43.838
00:32:43.838 real 0m0.473s
00:32:43.838 user 0m0.240s
00:32:43.838 sys 0m0.185s
00:32:43.838 13:53:40 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:43.838 13:53:40 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:32:43.838 ************************************
00:32:43.838 END TEST nvme_sgl
00:32:43.838 ************************************
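A note on the nvme_sgl output above: the "Invalid IO length parameter" lines are printed while the test builds its scatter-gather request variants; a build_io_request_N that cannot be built validly against a given namespace is reported and skipped, and the buildable cases are then submitted and verified ("test passed"). The suite still completes cleanly, as the END TEST banner confirms. A sketch for rerunning it in isolation, with the path taken from the run_test line:

    # assumes the same build tree and device bindings as the harness
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl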
00:32:43.838 13:53:40 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:32:43.838 13:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:43.838 13:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:43.838 13:53:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:32:43.838 ************************************
00:32:43.838 START TEST nvme_e2edp
00:32:43.838 ************************************
00:32:43.838 13:53:41 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:32:44.097 NVMe Write/Read with End-to-End data protection test
00:32:44.097 Attached to 0000:00:10.0
00:32:44.097 Attached to 0000:00:11.0
00:32:44.097 Attached to 0000:00:13.0
00:32:44.097 Attached to 0000:00:12.0
00:32:44.097 Cleaning up...
00:32:44.097 ************************************
00:32:44.097 END TEST nvme_e2edp
00:32:44.097 ************************************
00:32:44.097
00:32:44.097 real 0m0.377s
00:32:44.097 user 0m0.133s
00:32:44.097 sys 0m0.192s
00:32:44.097 13:53:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:44.097 13:53:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:32:44.356 13:53:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:32:44.356 13:53:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:44.356 13:53:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:44.356 13:53:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:32:44.356 ************************************
00:32:44.356 START TEST nvme_reserve
00:32:44.356 ************************************
00:32:44.356 13:53:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:32:44.615 =====================================================
00:32:44.615 NVMe Controller at PCI bus 0, device 16, function 0
00:32:44.615 =====================================================
00:32:44.615 Reservations: Not Supported
00:32:44.615 =====================================================
00:32:44.615 NVMe Controller at PCI bus 0, device 17, function 0
00:32:44.615 =====================================================
00:32:44.615 Reservations: Not Supported
00:32:44.615 =====================================================
00:32:44.615 NVMe Controller at PCI bus 0, device 19, function 0
00:32:44.615 =====================================================
00:32:44.615 Reservations: Not Supported
00:32:44.615 =====================================================
00:32:44.615 NVMe Controller at PCI bus 0, device 18, function 0
00:32:44.615 =====================================================
00:32:44.615 Reservations: Not Supported
00:32:44.615 Reservation test passed
00:32:44.615 ************************************
00:32:44.615 END TEST nvme_reserve
00:32:44.615 ************************************
00:32:44.615
00:32:44.615 real 0m0.373s
00:32:44.615 user 0m0.144s
00:32:44.615 sys 0m0.179s
00:32:44.615 13:53:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:44.615 13:53:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
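The reserve tool prints PCI locations in decimal, which maps back to the hex addresses used elsewhere in this log: bus 0, devices 16, 17, 19 and 18 are 0000:00:10.0, 0000:00:11.0, 0000:00:13.0 and 0000:00:12.0 respectively. All four emulated controllers (QEMU's 1b36:0010 device, per the attach lines earlier in the log) report "Reservations: Not Supported", so the pass above only shows that the tool handles the unsupported case gracefully. The device-number mapping, checked with shell printf:

    $ printf '0000:00:%02x.0\n' 16 17 19 18
    0000:00:10.0
    0000:00:11.0
    0000:00:13.0
    0000:00:12.0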
0000:00:12.0: get features successfully as expected 00:32:45.183 0000:00:10.0: get features successfully as expected 00:32:45.183 0000:00:11.0: get features successfully as expected 00:32:45.183 0000:00:13.0: get features successfully as expected 00:32:45.183 0000:00:10.0: read failed as expected 00:32:45.183 0000:00:11.0: read failed as expected 00:32:45.183 0000:00:13.0: read failed as expected 00:32:45.183 0000:00:12.0: read failed as expected 00:32:45.183 0000:00:10.0: read successfully as expected 00:32:45.183 0000:00:11.0: read successfully as expected 00:32:45.183 0000:00:13.0: read successfully as expected 00:32:45.183 0000:00:12.0: read successfully as expected 00:32:45.183 Cleaning up... 00:32:45.183 00:32:45.183 real 0m0.386s 00:32:45.183 user 0m0.153s 00:32:45.183 sys 0m0.188s 00:32:45.183 13:53:42 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.183 13:53:42 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:32:45.183 ************************************ 00:32:45.183 END TEST nvme_err_injection 00:32:45.184 ************************************ 00:32:45.184 13:53:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:45.184 13:53:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:32:45.184 13:53:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.184 13:53:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:45.184 ************************************ 00:32:45.184 START TEST nvme_overhead 00:32:45.184 ************************************ 00:32:45.184 13:53:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:46.560 Initializing NVMe Controllers 00:32:46.560 Attached to 0000:00:10.0 00:32:46.560 Attached to 0000:00:11.0 00:32:46.560 Attached to 0000:00:13.0 00:32:46.560 Attached to 0000:00:12.0 00:32:46.560 Initialization complete. Launching workers. 
00:32:46.560 submit (in ns) avg, min, max = 14862.8, 11906.7, 117706.7 00:32:46.560 complete (in ns) avg, min, max = 10125.3, 7889.5, 1956972.4 00:32:46.560 00:32:46.560 Submit histogram 00:32:46.560 ================ 00:32:46.560 Range in us Cumulative Count 00:32:46.560 11.886 - 11.947: 0.0124% ( 1) 00:32:46.560 12.008 - 12.069: 0.1245% ( 9) 00:32:46.560 12.069 - 12.130: 0.8590% ( 59) 00:32:46.560 12.130 - 12.190: 2.6640% ( 145) 00:32:46.560 12.190 - 12.251: 4.3819% ( 138) 00:32:46.560 12.251 - 12.312: 6.0500% ( 134) 00:32:46.560 12.312 - 12.373: 7.3074% ( 101) 00:32:46.560 12.373 - 12.434: 8.3406% ( 83) 00:32:46.560 12.434 - 12.495: 9.5606% ( 98) 00:32:46.560 12.495 - 12.556: 10.9673% ( 113) 00:32:46.560 12.556 - 12.617: 12.2619% ( 104) 00:32:46.560 12.617 - 12.678: 13.4943% ( 99) 00:32:46.560 12.678 - 12.739: 14.3782% ( 71) 00:32:46.560 12.739 - 12.800: 15.1251% ( 60) 00:32:46.561 12.800 - 12.861: 15.6480% ( 42) 00:32:46.561 12.861 - 12.922: 16.1459% ( 40) 00:32:46.561 12.922 - 12.983: 16.4696% ( 26) 00:32:46.561 12.983 - 13.044: 16.9177% ( 36) 00:32:46.561 13.044 - 13.105: 17.9012% ( 79) 00:32:46.561 13.105 - 13.166: 19.1958% ( 104) 00:32:46.561 13.166 - 13.227: 20.5154% ( 106) 00:32:46.561 13.227 - 13.288: 21.9221% ( 113) 00:32:46.561 13.288 - 13.349: 23.2665% ( 108) 00:32:46.561 13.349 - 13.410: 26.5405% ( 263) 00:32:46.561 13.410 - 13.470: 31.5324% ( 401) 00:32:46.561 13.470 - 13.531: 38.4041% ( 552) 00:32:46.561 13.531 - 13.592: 44.8774% ( 520) 00:32:46.561 13.592 - 13.653: 50.0809% ( 418) 00:32:46.561 13.653 - 13.714: 54.1143% ( 324) 00:32:46.561 13.714 - 13.775: 57.2264% ( 250) 00:32:46.561 13.775 - 13.836: 59.9153% ( 216) 00:32:46.561 13.836 - 13.897: 62.1561% ( 180) 00:32:46.561 13.897 - 13.958: 63.7495% ( 128) 00:32:46.561 13.958 - 14.019: 64.9695% ( 98) 00:32:46.561 14.019 - 14.080: 65.5919% ( 50) 00:32:46.561 14.080 - 14.141: 66.3886% ( 64) 00:32:46.561 14.141 - 14.202: 67.6211% ( 99) 00:32:46.561 14.202 - 14.263: 69.5382% ( 154) 00:32:46.561 14.263 - 14.324: 71.3806% ( 148) 00:32:46.561 14.324 - 14.385: 72.8370% ( 117) 00:32:46.561 14.385 - 14.446: 73.7956% ( 77) 00:32:46.561 14.446 - 14.507: 74.8164% ( 82) 00:32:46.561 14.507 - 14.568: 75.4886% ( 54) 00:32:46.561 14.568 - 14.629: 76.1608% ( 54) 00:32:46.561 14.629 - 14.690: 76.6214% ( 37) 00:32:46.561 14.690 - 14.750: 76.8331% ( 17) 00:32:46.561 14.750 - 14.811: 77.1692% ( 27) 00:32:46.561 14.811 - 14.872: 77.4182% ( 20) 00:32:46.561 14.872 - 14.933: 77.7169% ( 24) 00:32:46.561 14.933 - 14.994: 77.8912% ( 14) 00:32:46.561 14.994 - 15.055: 78.1775% ( 23) 00:32:46.561 15.055 - 15.116: 78.3394% ( 13) 00:32:46.561 15.116 - 15.177: 78.4763% ( 11) 00:32:46.561 15.177 - 15.238: 78.5385% ( 5) 00:32:46.561 15.238 - 15.299: 78.5883% ( 4) 00:32:46.561 15.299 - 15.360: 78.6381% ( 4) 00:32:46.561 15.360 - 15.421: 78.7128% ( 6) 00:32:46.561 15.421 - 15.482: 78.7502% ( 3) 00:32:46.561 15.482 - 15.543: 78.7875% ( 3) 00:32:46.561 15.543 - 15.604: 78.8497% ( 5) 00:32:46.561 15.604 - 15.726: 78.8871% ( 3) 00:32:46.561 15.726 - 15.848: 78.9244% ( 3) 00:32:46.561 15.848 - 15.970: 78.9742% ( 4) 00:32:46.561 15.970 - 16.091: 79.0365% ( 5) 00:32:46.561 16.091 - 16.213: 79.0489% ( 1) 00:32:46.561 16.335 - 16.457: 79.0614% ( 1) 00:32:46.561 16.457 - 16.579: 79.0863% ( 2) 00:32:46.561 16.579 - 16.701: 79.1112% ( 2) 00:32:46.561 16.701 - 16.823: 79.1236% ( 1) 00:32:46.561 16.823 - 16.945: 79.1361% ( 1) 00:32:46.561 16.945 - 17.067: 79.1485% ( 1) 00:32:46.561 17.067 - 17.189: 79.1983% ( 4) 00:32:46.561 17.189 - 17.310: 79.2108% ( 1) 00:32:46.561 
17.310 - 17.432: 79.2854% ( 6) 00:32:46.561 17.432 - 17.554: 79.2979% ( 1) 00:32:46.561 17.554 - 17.676: 79.3726% ( 6) 00:32:46.561 17.676 - 17.798: 79.4846% ( 9) 00:32:46.561 17.798 - 17.920: 79.6838% ( 16) 00:32:46.561 17.920 - 18.042: 79.8332% ( 12) 00:32:46.561 18.042 - 18.164: 79.9577% ( 10) 00:32:46.561 18.164 - 18.286: 79.9950% ( 3) 00:32:46.561 18.286 - 18.408: 80.0946% ( 8) 00:32:46.561 18.408 - 18.530: 80.2440% ( 12) 00:32:46.561 18.530 - 18.651: 81.1776% ( 75) 00:32:46.561 18.651 - 18.773: 84.5512% ( 271) 00:32:46.561 18.773 - 18.895: 88.6095% ( 326) 00:32:46.561 18.895 - 19.017: 91.5349% ( 235) 00:32:46.561 19.017 - 19.139: 93.3524% ( 146) 00:32:46.561 19.139 - 19.261: 94.3732% ( 82) 00:32:46.561 19.261 - 19.383: 95.0205% ( 52) 00:32:46.561 19.383 - 19.505: 95.4936% ( 38) 00:32:46.561 19.505 - 19.627: 95.8048% ( 25) 00:32:46.561 19.627 - 19.749: 95.9791% ( 14) 00:32:46.561 19.749 - 19.870: 96.1285% ( 12) 00:32:46.561 19.870 - 19.992: 96.3650% ( 19) 00:32:46.561 19.992 - 20.114: 96.5019% ( 11) 00:32:46.561 20.114 - 20.236: 96.6638% ( 13) 00:32:46.561 20.236 - 20.358: 96.8256% ( 13) 00:32:46.561 20.358 - 20.480: 96.9003% ( 6) 00:32:46.561 20.480 - 20.602: 96.9376% ( 3) 00:32:46.561 20.602 - 20.724: 97.0248% ( 7) 00:32:46.561 20.724 - 20.846: 97.0995% ( 6) 00:32:46.561 20.846 - 20.968: 97.1617% ( 5) 00:32:46.561 20.968 - 21.090: 97.2364% ( 6) 00:32:46.561 21.090 - 21.211: 97.3111% ( 6) 00:32:46.561 21.211 - 21.333: 97.3235% ( 1) 00:32:46.561 21.333 - 21.455: 97.4107% ( 7) 00:32:46.561 21.455 - 21.577: 97.4729% ( 5) 00:32:46.561 21.577 - 21.699: 97.4978% ( 2) 00:32:46.561 21.699 - 21.821: 97.5352% ( 3) 00:32:46.561 21.821 - 21.943: 97.5476% ( 1) 00:32:46.561 21.943 - 22.065: 97.5601% ( 1) 00:32:46.561 22.187 - 22.309: 97.5974% ( 3) 00:32:46.561 22.430 - 22.552: 97.6223% ( 2) 00:32:46.561 22.552 - 22.674: 97.6348% ( 1) 00:32:46.561 22.674 - 22.796: 97.6472% ( 1) 00:32:46.561 22.796 - 22.918: 97.6597% ( 1) 00:32:46.561 22.918 - 23.040: 97.6846% ( 2) 00:32:46.561 23.040 - 23.162: 97.7343% ( 4) 00:32:46.561 23.162 - 23.284: 97.7717% ( 3) 00:32:46.561 23.406 - 23.528: 97.7966% ( 2) 00:32:46.561 23.528 - 23.650: 97.8090% ( 1) 00:32:46.561 23.650 - 23.771: 97.8215% ( 1) 00:32:46.561 23.771 - 23.893: 97.8588% ( 3) 00:32:46.561 23.893 - 24.015: 97.8837% ( 2) 00:32:46.561 24.015 - 24.137: 97.9460% ( 5) 00:32:46.561 24.137 - 24.259: 98.0082% ( 5) 00:32:46.561 24.259 - 24.381: 98.0580% ( 4) 00:32:46.561 24.381 - 24.503: 98.0954% ( 3) 00:32:46.561 24.503 - 24.625: 98.1078% ( 1) 00:32:46.561 24.625 - 24.747: 98.1949% ( 7) 00:32:46.561 24.747 - 24.869: 98.2323% ( 3) 00:32:46.561 24.869 - 24.990: 98.3194% ( 7) 00:32:46.561 24.990 - 25.112: 98.3941% ( 6) 00:32:46.561 25.112 - 25.234: 98.4937% ( 8) 00:32:46.561 25.234 - 25.356: 98.6680% ( 14) 00:32:46.561 25.356 - 25.478: 98.7800% ( 9) 00:32:46.561 25.478 - 25.600: 98.8298% ( 4) 00:32:46.561 25.600 - 25.722: 98.8796% ( 4) 00:32:46.561 25.722 - 25.844: 98.9668% ( 7) 00:32:46.561 25.844 - 25.966: 99.0415% ( 6) 00:32:46.561 25.966 - 26.088: 99.1161% ( 6) 00:32:46.561 26.088 - 26.210: 99.1410% ( 2) 00:32:46.561 26.210 - 26.331: 99.1784% ( 3) 00:32:46.561 26.331 - 26.453: 99.2282% ( 4) 00:32:46.561 26.453 - 26.575: 99.2406% ( 1) 00:32:46.561 26.697 - 26.819: 99.2531% ( 1) 00:32:46.561 26.819 - 26.941: 99.2655% ( 1) 00:32:46.561 27.185 - 27.307: 99.2904% ( 2) 00:32:46.561 27.429 - 27.550: 99.3029% ( 1) 00:32:46.561 27.550 - 27.672: 99.3153% ( 1) 00:32:46.561 27.672 - 27.794: 99.3278% ( 1) 00:32:46.561 27.794 - 27.916: 99.3527% ( 2) 00:32:46.561 27.916 - 
28.038: 99.4025% ( 4) 00:32:46.561 28.038 - 28.160: 99.4149% ( 1) 00:32:46.561 28.160 - 28.282: 99.4523% ( 3) 00:32:46.561 28.282 - 28.404: 99.4647% ( 1) 00:32:46.561 28.404 - 28.526: 99.4896% ( 2) 00:32:46.561 28.526 - 28.648: 99.5021% ( 1) 00:32:46.561 28.648 - 28.770: 99.5145% ( 1) 00:32:46.561 28.891 - 29.013: 99.5394% ( 2) 00:32:46.561 29.135 - 29.257: 99.5643% ( 2) 00:32:46.561 29.257 - 29.379: 99.5767% ( 1) 00:32:46.561 29.379 - 29.501: 99.6016% ( 2) 00:32:46.561 29.501 - 29.623: 99.6390% ( 3) 00:32:46.561 29.867 - 29.989: 99.6514% ( 1) 00:32:46.561 29.989 - 30.110: 99.6763% ( 2) 00:32:46.561 30.110 - 30.232: 99.6888% ( 1) 00:32:46.561 30.232 - 30.354: 99.7137% ( 2) 00:32:46.561 30.354 - 30.476: 99.7261% ( 1) 00:32:46.561 30.842 - 30.964: 99.7386% ( 1) 00:32:46.561 31.086 - 31.208: 99.7510% ( 1) 00:32:46.561 31.451 - 31.695: 99.7635% ( 1) 00:32:46.561 31.695 - 31.939: 99.7759% ( 1) 00:32:46.561 31.939 - 32.183: 99.7884% ( 1) 00:32:46.561 32.914 - 33.158: 99.8008% ( 1) 00:32:46.561 33.158 - 33.402: 99.8133% ( 1) 00:32:46.561 33.890 - 34.133: 99.8506% ( 3) 00:32:46.561 34.621 - 34.865: 99.8631% ( 1) 00:32:46.561 34.865 - 35.109: 99.8755% ( 1) 00:32:46.561 35.109 - 35.352: 99.8880% ( 1) 00:32:46.561 36.815 - 37.059: 99.9004% ( 1) 00:32:46.561 38.766 - 39.010: 99.9129% ( 1) 00:32:46.561 39.010 - 39.253: 99.9253% ( 1) 00:32:46.561 40.229 - 40.472: 99.9378% ( 1) 00:32:46.561 41.448 - 41.691: 99.9502% ( 1) 00:32:46.561 98.011 - 98.499: 99.9627% ( 1) 00:32:46.561 102.888 - 103.375: 99.9751% ( 1) 00:32:46.561 111.665 - 112.152: 99.9876% ( 1) 00:32:46.561 117.516 - 118.004: 100.0000% ( 1) 00:32:46.561 00:32:46.561 Complete histogram 00:32:46.561 ================== 00:32:46.561 Range in us Cumulative Count 00:32:46.561 7.863 - 7.924: 0.0622% ( 5) 00:32:46.561 7.924 - 7.985: 0.5353% ( 38) 00:32:46.561 7.985 - 8.046: 2.1163% ( 127) 00:32:46.561 8.046 - 8.107: 3.4732% ( 109) 00:32:46.561 8.107 - 8.168: 4.3321% ( 69) 00:32:46.562 8.168 - 8.229: 4.9546% ( 50) 00:32:46.562 8.229 - 8.290: 5.4401% ( 39) 00:32:46.562 8.290 - 8.350: 6.2243% ( 63) 00:32:46.562 8.350 - 8.411: 8.1290% ( 153) 00:32:46.562 8.411 - 8.472: 10.0461% ( 154) 00:32:46.562 8.472 - 8.533: 11.5772% ( 123) 00:32:46.562 8.533 - 8.594: 12.6976% ( 90) 00:32:46.562 8.594 - 8.655: 14.1043% ( 113) 00:32:46.562 8.655 - 8.716: 21.1129% ( 563) 00:32:46.562 8.716 - 8.777: 35.1674% ( 1129) 00:32:46.562 8.777 - 8.838: 43.7943% ( 693) 00:32:46.562 8.838 - 8.899: 48.0891% ( 345) 00:32:46.562 8.899 - 8.960: 50.7282% ( 212) 00:32:46.562 8.960 - 9.021: 53.6537% ( 235) 00:32:46.562 9.021 - 9.082: 56.0936% ( 196) 00:32:46.562 9.082 - 9.143: 58.2721% ( 175) 00:32:46.562 9.143 - 9.204: 59.7535% ( 119) 00:32:46.562 9.204 - 9.265: 62.1188% ( 190) 00:32:46.562 9.265 - 9.326: 66.0401% ( 315) 00:32:46.562 9.326 - 9.387: 69.3763% ( 268) 00:32:46.562 9.387 - 9.448: 71.4303% ( 165) 00:32:46.562 9.448 - 9.509: 72.6130% ( 95) 00:32:46.562 9.509 - 9.570: 73.8952% ( 103) 00:32:46.562 9.570 - 9.630: 75.2770% ( 111) 00:32:46.562 9.630 - 9.691: 76.4596% ( 95) 00:32:46.562 9.691 - 9.752: 77.3684% ( 73) 00:32:46.562 9.752 - 9.813: 78.0406% ( 54) 00:32:46.562 9.813 - 9.874: 78.6381% ( 48) 00:32:46.562 9.874 - 9.935: 79.0116% ( 30) 00:32:46.562 9.935 - 9.996: 79.3726% ( 29) 00:32:46.562 9.996 - 10.057: 79.7087% ( 27) 00:32:46.562 10.057 - 10.118: 80.0199% ( 25) 00:32:46.562 10.118 - 10.179: 80.1942% ( 14) 00:32:46.562 10.179 - 10.240: 80.4307% ( 19) 00:32:46.562 10.240 - 10.301: 80.5552% ( 10) 00:32:46.562 10.301 - 10.362: 80.6672% ( 9) 00:32:46.562 10.362 - 10.423: 
80.7419% ( 6) 00:32:46.562 10.423 - 10.484: 80.7793% ( 3) 00:32:46.562 10.484 - 10.545: 80.8166% ( 3) 00:32:46.562 10.545 - 10.606: 80.9038% ( 7) 00:32:46.562 10.606 - 10.667: 80.9536% ( 4) 00:32:46.562 10.667 - 10.728: 80.9660% ( 1) 00:32:46.562 10.728 - 10.789: 80.9909% ( 2) 00:32:46.562 10.789 - 10.850: 81.0034% ( 1) 00:32:46.562 10.910 - 10.971: 81.0158% ( 1) 00:32:46.562 10.971 - 11.032: 81.0407% ( 2) 00:32:46.562 11.032 - 11.093: 81.0656% ( 2) 00:32:46.562 11.093 - 11.154: 81.0905% ( 2) 00:32:46.562 11.154 - 11.215: 81.1154% ( 2) 00:32:46.562 11.215 - 11.276: 81.1278% ( 1) 00:32:46.562 11.276 - 11.337: 81.1403% ( 1) 00:32:46.562 11.459 - 11.520: 81.1527% ( 1) 00:32:46.562 11.642 - 11.703: 81.1652% ( 1) 00:32:46.562 11.703 - 11.764: 81.2025% ( 3) 00:32:46.562 11.764 - 11.825: 81.2150% ( 1) 00:32:46.562 11.825 - 11.886: 81.2274% ( 1) 00:32:46.562 12.008 - 12.069: 81.2399% ( 1) 00:32:46.562 12.130 - 12.190: 81.2523% ( 1) 00:32:46.562 12.251 - 12.312: 81.2772% ( 2) 00:32:46.562 12.373 - 12.434: 81.2897% ( 1) 00:32:46.562 12.434 - 12.495: 81.3021% ( 1) 00:32:46.562 12.495 - 12.556: 81.4640% ( 13) 00:32:46.562 12.556 - 12.617: 83.3562% ( 152) 00:32:46.562 12.617 - 12.678: 87.2277% ( 311) 00:32:46.562 12.678 - 12.739: 90.6262% ( 273) 00:32:46.562 12.739 - 12.800: 92.2694% ( 132) 00:32:46.562 12.800 - 12.861: 92.9292% ( 53) 00:32:46.562 12.861 - 12.922: 93.4396% ( 41) 00:32:46.562 12.922 - 12.983: 93.9500% ( 41) 00:32:46.562 12.983 - 13.044: 94.6222% ( 54) 00:32:46.562 13.044 - 13.105: 95.1948% ( 46) 00:32:46.562 13.105 - 13.166: 95.5932% ( 32) 00:32:46.562 13.166 - 13.227: 95.9293% ( 27) 00:32:46.562 13.227 - 13.288: 96.1409% ( 17) 00:32:46.562 13.288 - 13.349: 96.3152% ( 14) 00:32:46.562 13.349 - 13.410: 96.3774% ( 5) 00:32:46.562 13.410 - 13.470: 96.4148% ( 3) 00:32:46.562 13.470 - 13.531: 96.4521% ( 3) 00:32:46.562 13.531 - 13.592: 96.5766% ( 10) 00:32:46.562 13.592 - 13.653: 96.6513% ( 6) 00:32:46.562 13.653 - 13.714: 96.6762% ( 2) 00:32:46.562 13.714 - 13.775: 96.7136% ( 3) 00:32:46.562 13.775 - 13.836: 96.7509% ( 3) 00:32:46.562 13.836 - 13.897: 96.8131% ( 5) 00:32:46.562 13.897 - 13.958: 96.8629% ( 4) 00:32:46.562 13.958 - 14.019: 96.8754% ( 1) 00:32:46.562 14.019 - 14.080: 96.9003% ( 2) 00:32:46.562 14.080 - 14.141: 96.9625% ( 5) 00:32:46.562 14.141 - 14.202: 96.9874% ( 2) 00:32:46.562 14.202 - 14.263: 97.0372% ( 4) 00:32:46.562 14.263 - 14.324: 97.0621% ( 2) 00:32:46.562 14.385 - 14.446: 97.1244% ( 5) 00:32:46.562 14.446 - 14.507: 97.1991% ( 6) 00:32:46.562 14.507 - 14.568: 97.2115% ( 1) 00:32:46.562 14.568 - 14.629: 97.2364% ( 2) 00:32:46.562 14.629 - 14.690: 97.2862% ( 4) 00:32:46.562 14.690 - 14.750: 97.3111% ( 2) 00:32:46.562 14.750 - 14.811: 97.3982% ( 7) 00:32:46.562 14.811 - 14.872: 97.4605% ( 5) 00:32:46.562 14.872 - 14.933: 97.4978% ( 3) 00:32:46.562 14.933 - 14.994: 97.5352% ( 3) 00:32:46.562 14.994 - 15.055: 97.5476% ( 1) 00:32:46.562 15.055 - 15.116: 97.5725% ( 2) 00:32:46.562 15.116 - 15.177: 97.6348% ( 5) 00:32:46.562 15.177 - 15.238: 97.6721% ( 3) 00:32:46.562 15.299 - 15.360: 97.7094% ( 3) 00:32:46.562 15.360 - 15.421: 97.7592% ( 4) 00:32:46.562 15.421 - 15.482: 97.7841% ( 2) 00:32:46.562 15.482 - 15.543: 97.8339% ( 4) 00:32:46.562 15.543 - 15.604: 97.8464% ( 1) 00:32:46.562 15.604 - 15.726: 97.8713% ( 2) 00:32:46.562 15.726 - 15.848: 97.9086% ( 3) 00:32:46.562 15.848 - 15.970: 97.9709% ( 5) 00:32:46.562 15.970 - 16.091: 98.0331% ( 5) 00:32:46.562 16.091 - 16.213: 98.0705% ( 3) 00:32:46.562 16.213 - 16.335: 98.0954% ( 2) 00:32:46.562 16.335 - 16.457: 98.1078% ( 1) 
00:32:46.562 16.457 - 16.579: 98.1327% ( 2) 00:32:46.562 16.579 - 16.701: 98.1949% ( 5) 00:32:46.562 16.823 - 16.945: 98.2572% ( 5) 00:32:46.562 16.945 - 17.067: 98.2821% ( 2) 00:32:46.562 17.067 - 17.189: 98.2945% ( 1) 00:32:46.562 17.189 - 17.310: 98.3319% ( 3) 00:32:46.562 17.310 - 17.432: 98.3568% ( 2) 00:32:46.562 17.432 - 17.554: 98.3692% ( 1) 00:32:46.562 17.798 - 17.920: 98.3941% ( 2) 00:32:46.562 17.920 - 18.042: 98.4066% ( 1) 00:32:46.562 18.042 - 18.164: 98.4190% ( 1) 00:32:46.562 18.164 - 18.286: 98.4315% ( 1) 00:32:46.562 18.286 - 18.408: 98.4439% ( 1) 00:32:46.562 18.408 - 18.530: 98.4564% ( 1) 00:32:46.562 18.895 - 19.017: 98.4813% ( 2) 00:32:46.562 19.017 - 19.139: 98.5062% ( 2) 00:32:46.562 19.627 - 19.749: 98.5186% ( 1) 00:32:46.562 19.992 - 20.114: 98.5684% ( 4) 00:32:46.562 20.114 - 20.236: 98.5933% ( 2) 00:32:46.562 20.236 - 20.358: 98.6555% ( 5) 00:32:46.562 20.358 - 20.480: 98.7302% ( 6) 00:32:46.562 20.480 - 20.602: 98.8298% ( 8) 00:32:46.562 20.602 - 20.724: 98.8921% ( 5) 00:32:46.562 20.724 - 20.846: 98.9419% ( 4) 00:32:46.562 20.846 - 20.968: 98.9668% ( 2) 00:32:46.562 20.968 - 21.090: 98.9917% ( 2) 00:32:46.562 21.090 - 21.211: 99.1037% ( 9) 00:32:46.562 21.211 - 21.333: 99.1535% ( 4) 00:32:46.562 21.333 - 21.455: 99.1784% ( 2) 00:32:46.562 21.455 - 21.577: 99.2531% ( 6) 00:32:46.562 21.577 - 21.699: 99.2655% ( 1) 00:32:46.562 21.699 - 21.821: 99.2904% ( 2) 00:32:46.562 21.821 - 21.943: 99.3029% ( 1) 00:32:46.562 21.943 - 22.065: 99.3278% ( 2) 00:32:46.562 22.065 - 22.187: 99.3900% ( 5) 00:32:46.562 22.187 - 22.309: 99.4274% ( 3) 00:32:46.562 22.309 - 22.430: 99.4523% ( 2) 00:32:46.562 22.430 - 22.552: 99.4772% ( 2) 00:32:46.562 22.552 - 22.674: 99.4896% ( 1) 00:32:46.562 22.796 - 22.918: 99.5145% ( 2) 00:32:46.562 22.918 - 23.040: 99.5270% ( 1) 00:32:46.562 23.040 - 23.162: 99.5394% ( 1) 00:32:46.562 23.406 - 23.528: 99.5518% ( 1) 00:32:46.562 23.650 - 23.771: 99.5643% ( 1) 00:32:46.562 23.893 - 24.015: 99.5767% ( 1) 00:32:46.562 24.015 - 24.137: 99.5892% ( 1) 00:32:46.562 24.747 - 24.869: 99.6016% ( 1) 00:32:46.562 25.112 - 25.234: 99.6141% ( 1) 00:32:46.562 25.234 - 25.356: 99.6390% ( 2) 00:32:46.562 25.478 - 25.600: 99.6763% ( 3) 00:32:46.562 25.600 - 25.722: 99.7137% ( 3) 00:32:46.562 25.722 - 25.844: 99.7386% ( 2) 00:32:46.562 25.844 - 25.966: 99.7510% ( 1) 00:32:46.562 26.088 - 26.210: 99.7635% ( 1) 00:32:46.562 26.697 - 26.819: 99.7759% ( 1) 00:32:46.562 26.819 - 26.941: 99.7884% ( 1) 00:32:46.562 26.941 - 27.063: 99.8008% ( 1) 00:32:46.562 27.063 - 27.185: 99.8133% ( 1) 00:32:46.562 27.307 - 27.429: 99.8382% ( 2) 00:32:46.562 27.916 - 28.038: 99.8506% ( 1) 00:32:46.562 28.526 - 28.648: 99.8631% ( 1) 00:32:46.562 29.867 - 29.989: 99.8755% ( 1) 00:32:46.562 31.086 - 31.208: 99.8880% ( 1) 00:32:46.562 31.451 - 31.695: 99.9004% ( 1) 00:32:46.562 31.695 - 31.939: 99.9129% ( 1) 00:32:46.562 32.670 - 32.914: 99.9253% ( 1) 00:32:46.563 35.109 - 35.352: 99.9378% ( 1) 00:32:46.563 41.448 - 41.691: 99.9502% ( 1) 00:32:46.563 41.691 - 41.935: 99.9627% ( 1) 00:32:46.563 60.709 - 60.952: 99.9751% ( 1) 00:32:46.563 204.800 - 205.775: 99.9876% ( 1) 00:32:46.563 1950.476 - 1958.278: 100.0000% ( 1) 00:32:46.563 00:32:46.563 00:32:46.563 real 0m1.369s 00:32:46.563 user 0m1.127s 00:32:46.563 sys 0m0.193s 00:32:46.563 13:53:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.563 13:53:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:32:46.563 ************************************ 00:32:46.563 END TEST nvme_overhead 
00:32:46.563 ************************************ 00:32:46.563 13:53:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:46.563 13:53:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:46.563 13:53:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.563 13:53:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:46.563 ************************************ 00:32:46.563 START TEST nvme_arbitration 00:32:46.563 ************************************ 00:32:46.563 13:53:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:50.747 Initializing NVMe Controllers 00:32:50.748 Attached to 0000:00:10.0 00:32:50.748 Attached to 0000:00:11.0 00:32:50.748 Attached to 0000:00:13.0 00:32:50.748 Attached to 0000:00:12.0 00:32:50.748 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:32:50.748 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:32:50.748 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:32:50.748 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:32:50.748 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:32:50.748 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:32:50.748 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:32:50.748 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:32:50.748 Initialization complete. Launching workers. 00:32:50.748 Starting thread on core 1 with urgent priority queue 00:32:50.748 Starting thread on core 2 with urgent priority queue 00:32:50.748 Starting thread on core 3 with urgent priority queue 00:32:50.748 Starting thread on core 0 with urgent priority queue 00:32:50.748 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:32:50.748 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:32:50.748 QEMU NVMe Ctrl (12341 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:32:50.748 QEMU NVMe Ctrl (12342 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:32:50.748 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:32:50.748 QEMU NVMe Ctrl (12342 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:32:50.748 ======================================================== 00:32:50.748 00:32:50.748 ************************************ 00:32:50.748 END TEST nvme_arbitration 00:32:50.748 ************************************ 00:32:50.748 00:32:50.748 real 0m3.522s 00:32:50.748 user 0m9.524s 00:32:50.748 sys 0m0.212s 00:32:50.748 13:53:47 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.748 13:53:47 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:32:50.748 13:53:47 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:50.748 ************************************ 00:32:50.748 START TEST nvme_single_aen 00:32:50.748 ************************************ 00:32:50.748 13:53:47 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:32:50.748 Asynchronous Event Request test 00:32:50.748 Attached to 0000:00:10.0 
00:32:50.748 Attached to 0000:00:11.0 00:32:50.748 Attached to 0000:00:13.0 00:32:50.748 Attached to 0000:00:12.0 00:32:50.748 Reset controller to setup AER completions for this process 00:32:50.748 Registering asynchronous event callbacks... 00:32:50.748 Getting orig temperature thresholds of all controllers 00:32:50.748 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:50.748 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:50.748 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:50.748 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:50.748 Setting all controllers temperature threshold low to trigger AER 00:32:50.748 Waiting for all controllers temperature threshold to be set lower 00:32:50.748 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:50.748 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:32:50.748 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:50.748 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:32:50.748 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:50.748 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:32:50.748 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:50.748 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:32:50.748 Waiting for all controllers to trigger AER and reset threshold 00:32:50.748 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:50.748 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:50.748 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:50.748 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:50.748 Cleaning up... 
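The single-AEN pass above arms an Asynchronous Event Request by dropping each controller's temperature threshold below the emulated reading of 323 Kelvin, then restores the original 343 Kelvin threshold once the event fires. A minimal sketch of the two driver calls involved, assuming an attached controller; the busy-poll loop is illustrative:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool g_aer_seen;

    /* Invoked from spdk_nvme_ctrlr_process_admin_completions() when
     * the controller posts an AER completion. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            g_aer_seen = true;
        }
    }

    static void
    set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "set temperature threshold failed\n");
        }
    }

    /* Arm a temperature AER by setting the composite threshold to 0 K,
     * which is below any reading the controller will report. */
    static void
    trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* cdw11 bits 15:0 carry the threshold in Kelvin. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
            0 /* cdw11 */, 0 /* cdw12 */, NULL, 0,
            set_feature_done, NULL);

        while (!g_aer_seen) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }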
00:32:50.748 ************************************ 00:32:50.748 END TEST nvme_single_aen 00:32:50.748 ************************************ 00:32:50.748 00:32:50.748 real 0m0.289s 00:32:50.748 user 0m0.093s 00:32:50.748 sys 0m0.153s 00:32:50.748 13:53:47 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.748 13:53:47 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:32:50.748 13:53:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.748 13:53:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:50.748 ************************************ 00:32:50.748 START TEST nvme_doorbell_aers 00:32:50.748 ************************************ 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:32:50.748 13:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:50.748 [2024-11-20 13:53:48.058088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:00.727 Executing: test_write_invalid_db 00:33:00.727 Waiting for AER completion... 00:33:00.727 Failure: test_write_invalid_db 00:33:00.727 00:33:00.727 Executing: test_invalid_db_write_overflow_sq 00:33:00.727 Waiting for AER completion... 00:33:00.727 Failure: test_invalid_db_write_overflow_sq 00:33:00.727 00:33:00.727 Executing: test_invalid_db_write_overflow_cq 00:33:00.727 Waiting for AER completion... 
00:33:00.727 Failure: test_invalid_db_write_overflow_cq 00:33:00.727 00:33:00.727 13:53:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:33:00.727 13:53:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:33:00.986 [2024-11-20 13:53:58.062537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:10.961 Executing: test_write_invalid_db 00:33:10.961 Waiting for AER completion... 00:33:10.961 Failure: test_write_invalid_db 00:33:10.961 00:33:10.961 Executing: test_invalid_db_write_overflow_sq 00:33:10.961 Waiting for AER completion... 00:33:10.961 Failure: test_invalid_db_write_overflow_sq 00:33:10.961 00:33:10.961 Executing: test_invalid_db_write_overflow_cq 00:33:10.961 Waiting for AER completion... 00:33:10.961 Failure: test_invalid_db_write_overflow_cq 00:33:10.961 00:33:10.961 13:54:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:33:10.961 13:54:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:33:10.961 [2024-11-20 13:54:08.180897] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:20.941 Executing: test_write_invalid_db 00:33:20.941 Waiting for AER completion... 00:33:20.941 Failure: test_write_invalid_db 00:33:20.941 00:33:20.941 Executing: test_invalid_db_write_overflow_sq 00:33:20.941 Waiting for AER completion... 00:33:20.941 Failure: test_invalid_db_write_overflow_sq 00:33:20.941 00:33:20.941 Executing: test_invalid_db_write_overflow_cq 00:33:20.941 Waiting for AER completion... 00:33:20.941 Failure: test_invalid_db_write_overflow_cq 00:33:20.941 00:33:20.941 13:54:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:33:20.941 13:54:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:33:20.941 [2024-11-20 13:54:18.230858] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:30.920 Executing: test_write_invalid_db 00:33:30.920 Waiting for AER completion... 00:33:30.920 Failure: test_write_invalid_db 00:33:30.920 00:33:30.920 Executing: test_invalid_db_write_overflow_sq 00:33:30.920 Waiting for AER completion... 00:33:30.920 Failure: test_invalid_db_write_overflow_sq 00:33:30.920 00:33:30.920 Executing: test_invalid_db_write_overflow_cq 00:33:30.920 Waiting for AER completion... 
00:33:30.920 Failure: test_invalid_db_write_overflow_cq 00:33:30.920 00:33:30.920 00:33:30.920 real 0m40.291s 00:33:30.920 user 0m28.368s 00:33:30.920 sys 0m11.531s 00:33:30.920 13:54:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.920 13:54:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:33:30.920 ************************************ 00:33:30.920 END TEST nvme_doorbell_aers 00:33:30.920 ************************************ 00:33:30.920 13:54:27 nvme -- nvme/nvme.sh@97 -- # uname 00:33:30.920 13:54:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:33:30.920 13:54:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:33:30.920 13:54:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:33:30.920 13:54:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.920 13:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:33:30.920 ************************************ 00:33:30.920 START TEST nvme_multi_aen 00:33:30.920 ************************************ 00:33:30.920 13:54:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:33:31.179 [2024-11-20 13:54:28.263308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.179 [2024-11-20 13:54:28.263657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.179 [2024-11-20 13:54:28.263684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.179 [2024-11-20 13:54:28.265537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.179 [2024-11-20 13:54:28.265586] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.265607] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.267137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.267314] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.267337] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.268803] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.268843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 00:33:31.180 [2024-11-20 13:54:28.268860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64955) is not found. Dropping the request. 
00:33:31.180 Child process pid: 65476 00:33:31.439 [Child] Asynchronous Event Request test 00:33:31.439 [Child] Attached to 0000:00:10.0 00:33:31.439 [Child] Attached to 0000:00:11.0 00:33:31.439 [Child] Attached to 0000:00:13.0 00:33:31.439 [Child] Attached to 0000:00:12.0 00:33:31.439 [Child] Registering asynchronous event callbacks... 00:33:31.439 [Child] Getting orig temperature thresholds of all controllers 00:33:31.439 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 [Child] Waiting for all controllers to trigger AER and reset threshold 00:33:31.439 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.439 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.439 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.439 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.439 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.439 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.439 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.439 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.439 [Child] Cleaning up... 00:33:31.439 Asynchronous Event Request test 00:33:31.439 Attached to 0000:00:10.0 00:33:31.439 Attached to 0000:00:11.0 00:33:31.439 Attached to 0000:00:13.0 00:33:31.439 Attached to 0000:00:12.0 00:33:31.439 Reset controller to setup AER completions for this process 00:33:31.439 Registering asynchronous event callbacks... 
00:33:31.439 Getting orig temperature thresholds of all controllers 00:33:31.439 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.439 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.440 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:33:31.440 Setting all controllers temperature threshold low to trigger AER 00:33:31.440 Waiting for all controllers temperature threshold to be set lower 00:33:31.440 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.440 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:33:31.440 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.440 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:33:31.440 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.440 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:33:31.440 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:33:31.440 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:33:31.440 Waiting for all controllers to trigger AER and reset threshold 00:33:31.440 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.440 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.440 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.440 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:33:31.440 Cleaning up... 00:33:31.440 00:33:31.440 real 0m0.638s 00:33:31.440 user 0m0.207s 00:33:31.440 sys 0m0.324s 00:33:31.440 13:54:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.440 13:54:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:33:31.440 ************************************ 00:33:31.440 END TEST nvme_multi_aen 00:33:31.440 ************************************ 00:33:31.440 13:54:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:33:31.440 13:54:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:31.440 13:54:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.440 13:54:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:33:31.440 ************************************ 00:33:31.440 START TEST nvme_startup 00:33:31.440 ************************************ 00:33:31.440 13:54:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:33:32.007 Initializing NVMe Controllers 00:33:32.007 Attached to 0000:00:10.0 00:33:32.007 Attached to 0000:00:11.0 00:33:32.007 Attached to 0000:00:13.0 00:33:32.007 Attached to 0000:00:12.0 00:33:32.007 Initialization complete. 00:33:32.007 Time used:253516.094 (us). 
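The "Time used" figure that nvme_startup prints above is a tick-counter delta around probe/attach. A minimal sketch of the measurement, assuming spdk_env_init() has already run and keeping the probe/attach callbacks trivial:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;  /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* a real test would stash ctrlr for later cleanup */
    }

    /* Returns the probe+attach wall time in microseconds,
     * or a negative value on failure. */
    static double
    timed_probe(void)
    {
        uint64_t start = spdk_get_ticks();

        if (spdk_nvme_probe(NULL /* NULL trid = local PCIe bus */, NULL,
                            probe_cb, attach_cb, NULL) != 0) {
            return -1.0;
        }
        return (double)(spdk_get_ticks() - start) * 1000000.0 /
               (double)spdk_get_ticks_hz();
    }

A caller would print the result in the same "Time used:... (us)." format seen above, e.g. printf("Time used:%.3f (us).\n", timed_probe()).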
00:33:32.007 ************************************ 00:33:32.007 END TEST nvme_startup 00:33:32.007 ************************************ 00:33:32.007 00:33:32.007 real 0m0.368s 00:33:32.007 user 0m0.142s 00:33:32.007 sys 0m0.178s 00:33:32.007 13:54:29 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.007 13:54:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:33:32.007 13:54:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:33:32.007 13:54:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:32.007 13:54:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.007 13:54:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:33:32.007 ************************************ 00:33:32.007 START TEST nvme_multi_secondary 00:33:32.007 ************************************ 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65531 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65532 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:33:32.007 13:54:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:33:35.290 Initializing NVMe Controllers 00:33:35.290 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:35.290 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:35.290 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:35.290 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:35.290 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:33:35.290 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:33:35.290 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:33:35.290 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:33:35.290 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:33:35.290 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:33:35.290 Initialization complete. Launching workers. 
00:33:35.290 ======================================================== 00:33:35.290 Latency(us) 00:33:35.290 Device Information : IOPS MiB/s Average min max 00:33:35.290 PCIE (0000:00:10.0) NSID 1 from core 1: 5455.82 21.31 2930.75 1046.53 7485.30 00:33:35.290 PCIE (0000:00:11.0) NSID 1 from core 1: 5455.82 21.31 2932.23 1086.92 7230.70 00:33:35.290 PCIE (0000:00:13.0) NSID 1 from core 1: 5455.82 21.31 2932.15 1077.88 8365.06 00:33:35.290 PCIE (0000:00:12.0) NSID 1 from core 1: 5455.82 21.31 2932.10 1081.01 7347.08 00:33:35.290 PCIE (0000:00:12.0) NSID 2 from core 1: 5455.82 21.31 2932.04 1082.66 6556.88 00:33:35.290 PCIE (0000:00:12.0) NSID 3 from core 1: 5455.82 21.31 2931.97 1083.19 7058.82 00:33:35.290 ======================================================== 00:33:35.290 Total : 32734.93 127.87 2931.87 1046.53 8365.06 00:33:35.290 00:33:35.547 Initializing NVMe Controllers 00:33:35.547 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:35.547 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:35.547 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:35.547 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:35.547 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:33:35.547 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:33:35.547 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:33:35.547 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:33:35.547 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:33:35.547 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:33:35.547 Initialization complete. Launching workers. 00:33:35.547 ======================================================== 00:33:35.547 Latency(us) 00:33:35.547 Device Information : IOPS MiB/s Average min max 00:33:35.547 PCIE (0000:00:10.0) NSID 1 from core 2: 2427.24 9.48 6581.89 1891.93 17598.90 00:33:35.547 PCIE (0000:00:11.0) NSID 1 from core 2: 2427.24 9.48 6582.32 1905.30 17486.84 00:33:35.547 PCIE (0000:00:13.0) NSID 1 from core 2: 2427.24 9.48 6582.70 1809.22 14132.59 00:33:35.547 PCIE (0000:00:12.0) NSID 1 from core 2: 2427.24 9.48 6591.72 1799.07 15054.28 00:33:35.547 PCIE (0000:00:12.0) NSID 2 from core 2: 2427.24 9.48 6592.31 1870.93 16094.52 00:33:35.547 PCIE (0000:00:12.0) NSID 3 from core 2: 2427.24 9.48 6592.15 1816.91 16468.22 00:33:35.547 ======================================================== 00:33:35.547 Total : 14563.45 56.89 6587.18 1799.07 17598.90 00:33:35.547 00:33:35.805 13:54:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65531 00:33:37.706 Initializing NVMe Controllers 00:33:37.706 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:37.706 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:37.706 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:37.706 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:37.706 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:33:37.706 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:33:37.706 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:33:37.706 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:33:37.706 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:33:37.706 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:33:37.706 Initialization complete. Launching workers. 
00:33:37.706 ======================================================== 00:33:37.706 Latency(us) 00:33:37.706 Device Information : IOPS MiB/s Average min max 00:33:37.706 PCIE (0000:00:10.0) NSID 1 from core 0: 7634.25 29.82 2094.27 968.84 8075.82 00:33:37.706 PCIE (0000:00:11.0) NSID 1 from core 0: 7634.25 29.82 2095.32 984.95 7988.95 00:33:37.706 PCIE (0000:00:13.0) NSID 1 from core 0: 7634.05 29.82 2095.29 982.55 9066.03 00:33:37.706 PCIE (0000:00:12.0) NSID 1 from core 0: 7634.25 29.82 2095.15 981.04 8897.72 00:33:37.706 PCIE (0000:00:12.0) NSID 2 from core 0: 7634.25 29.82 2095.06 988.11 7858.25 00:33:37.706 PCIE (0000:00:12.0) NSID 3 from core 0: 7634.25 29.82 2094.98 900.21 8194.73 00:33:37.706 ======================================================== 00:33:37.706 Total : 45805.30 178.93 2095.01 900.21 9066.03 00:33:37.706 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65532 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65607 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65608 00:33:37.706 13:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:33:41.051 Initializing NVMe Controllers 00:33:41.051 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:41.051 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:33:41.051 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:33:41.051 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:33:41.051 Initialization complete. Launching workers. 
00:33:41.051 ======================================================== 00:33:41.051 Latency(us) 00:33:41.051 Device Information : IOPS MiB/s Average min max 00:33:41.051 PCIE (0000:00:10.0) NSID 1 from core 1: 5544.46 21.66 2884.13 1163.70 6437.73 00:33:41.051 PCIE (0000:00:11.0) NSID 1 from core 1: 5544.46 21.66 2885.29 1170.51 6373.74 00:33:41.051 PCIE (0000:00:13.0) NSID 1 from core 1: 5544.46 21.66 2885.53 1187.88 6501.32 00:33:41.051 PCIE (0000:00:12.0) NSID 1 from core 1: 5544.46 21.66 2885.56 1188.97 6489.19 00:33:41.051 PCIE (0000:00:12.0) NSID 2 from core 1: 5544.46 21.66 2885.96 1191.80 6262.55 00:33:41.051 PCIE (0000:00:12.0) NSID 3 from core 1: 5544.46 21.66 2886.12 1193.23 6486.13 00:33:41.051 ======================================================== 00:33:41.051 Total : 33266.78 129.95 2885.43 1163.70 6501.32 00:33:41.051 00:33:41.051 Initializing NVMe Controllers 00:33:41.051 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:41.051 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:41.051 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:33:41.051 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:33:41.051 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:33:41.051 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:33:41.051 Initialization complete. Launching workers. 00:33:41.051 ======================================================== 00:33:41.051 Latency(us) 00:33:41.051 Device Information : IOPS MiB/s Average min max 00:33:41.051 PCIE (0000:00:10.0) NSID 1 from core 0: 5433.35 21.22 2942.91 1015.84 7965.34 00:33:41.051 PCIE (0000:00:11.0) NSID 1 from core 0: 5433.35 21.22 2944.21 1042.34 8907.65 00:33:41.051 PCIE (0000:00:13.0) NSID 1 from core 0: 5433.35 21.22 2944.17 1032.64 9332.66 00:33:41.051 PCIE (0000:00:12.0) NSID 1 from core 0: 5433.35 21.22 2944.18 1078.46 8216.23 00:33:41.051 PCIE (0000:00:12.0) NSID 2 from core 0: 5433.35 21.22 2944.17 1059.84 8052.64 00:33:41.051 PCIE (0000:00:12.0) NSID 3 from core 0: 5433.35 21.22 2944.12 1043.93 7741.45 00:33:41.051 ======================================================== 00:33:41.051 Total : 32600.11 127.34 2943.96 1015.84 9332.66 00:33:41.051 00:33:42.972 Initializing NVMe Controllers 00:33:42.972 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:42.972 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:33:42.972 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:33:42.972 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:33:42.972 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:33:42.972 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:33:42.972 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:33:42.972 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:33:42.972 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:33:42.972 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:33:42.972 Initialization complete. Launching workers. 
00:33:42.972 ======================================================== 00:33:42.972 Latency(us) 00:33:42.972 Device Information : IOPS MiB/s Average min max 00:33:42.972 PCIE (0000:00:10.0) NSID 1 from core 2: 3623.17 14.15 4413.67 1042.11 13088.02 00:33:42.972 PCIE (0000:00:11.0) NSID 1 from core 2: 3623.17 14.15 4415.60 1046.80 13736.40 00:33:42.972 PCIE (0000:00:13.0) NSID 1 from core 2: 3623.17 14.15 4415.10 1068.25 16631.84 00:33:42.972 PCIE (0000:00:12.0) NSID 1 from core 2: 3623.17 14.15 4415.23 1048.43 13124.96 00:33:42.972 PCIE (0000:00:12.0) NSID 2 from core 2: 3623.17 14.15 4415.14 995.07 12904.63 00:33:42.972 PCIE (0000:00:12.0) NSID 3 from core 2: 3623.17 14.15 4415.05 978.99 12895.05 00:33:42.972 ======================================================== 00:33:42.972 Total : 21739.00 84.92 4414.97 978.99 16631.84 00:33:42.972 00:33:42.972 ************************************ 00:33:42.972 END TEST nvme_multi_secondary 00:33:42.972 ************************************ 00:33:42.972 13:54:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65607 00:33:42.972 13:54:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65608 00:33:42.972 00:33:42.972 real 0m11.071s 00:33:42.972 user 0m18.742s 00:33:42.972 sys 0m1.144s 00:33:42.972 13:54:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.972 13:54:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:33:42.972 13:54:40 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:33:42.972 13:54:40 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:33:42.972 13:54:40 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64528 ]] 00:33:42.972 13:54:40 nvme -- common/autotest_common.sh@1094 -- # kill 64528 00:33:42.972 13:54:40 nvme -- common/autotest_common.sh@1095 -- # wait 64528 00:33:42.972 [2024-11-20 13:54:40.218564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.218639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.218691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.218717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.222299] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.222360] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.222382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.222407] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.226109] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 
00:33:42.972 [2024-11-20 13:54:40.226164] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.226186] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.226210] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.229526] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.229582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.229604] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:42.972 [2024-11-20 13:54:40.229628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65474) is not found. Dropping the request. 00:33:43.231 13:54:40 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:33:43.231 13:54:40 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:33:43.231 13:54:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:33:43.231 13:54:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:43.231 13:54:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.231 13:54:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:33:43.231 ************************************ 00:33:43.231 START TEST bdev_nvme_reset_stuck_adm_cmd 00:33:43.231 ************************************ 00:33:43.231 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:33:43.491 * Looking for test storage... 
00:33:43.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:43.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.491 --rc genhtml_branch_coverage=1 00:33:43.491 --rc genhtml_function_coverage=1 00:33:43.491 --rc genhtml_legend=1 00:33:43.491 --rc geninfo_all_blocks=1 00:33:43.491 --rc geninfo_unexecuted_blocks=1 00:33:43.491 00:33:43.491 ' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:43.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.491 --rc genhtml_branch_coverage=1 00:33:43.491 --rc genhtml_function_coverage=1 00:33:43.491 --rc genhtml_legend=1 00:33:43.491 --rc geninfo_all_blocks=1 00:33:43.491 --rc geninfo_unexecuted_blocks=1 00:33:43.491 00:33:43.491 ' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:43.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.491 --rc genhtml_branch_coverage=1 00:33:43.491 --rc genhtml_function_coverage=1 00:33:43.491 --rc genhtml_legend=1 00:33:43.491 --rc geninfo_all_blocks=1 00:33:43.491 --rc geninfo_unexecuted_blocks=1 00:33:43.491 00:33:43.491 ' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:43.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.491 --rc genhtml_branch_coverage=1 00:33:43.491 --rc genhtml_function_coverage=1 00:33:43.491 --rc genhtml_legend=1 00:33:43.491 --rc geninfo_all_blocks=1 00:33:43.491 --rc geninfo_unexecuted_blocks=1 00:33:43.491 00:33:43.491 ' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:33:43.491 
13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65770 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65770 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65770 ']' 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
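Note on the get_first_nvme_bdf walk just traced: the harness enumerates controllers before launching spdk_tgt by having scripts/gen_nvme.sh emit a bdev JSON config for every NVMe device it can see, pulling each PCI address out of .config[].params.traddr with jq, and taking the first of the four QEMU controllers (0000:00:10.0) as the reset target. A standalone sketch of that enumeration, assuming the same repo checkout path as the trace:

    #!/usr/bin/env bash
    # Sketch of the get_first_nvme_bdf flow traced above: gen_nvme.sh emits
    # a bdev config, jq extracts every traddr, and the first BDF wins.
    rootdir=/home/vagrant/spdk_repo/spdk    # path taken from the log

    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

    printf '%s\n' "${bdfs[@]}"              # 0000:00:10.0 ... 0000:00:13.0
    bdf=${bdfs[0]}
    echo "first bdf: $bdf"

The empty-set guard mirrors the `(( 4 == 0 ))` check in the log; with four emulated controllers present it is a no-op here.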
00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.491 13:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:43.751 [2024-11-20 13:54:40.897736] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:43.751 [2024-11-20 13:54:40.898144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65770 ] 00:33:44.010 [2024-11-20 13:54:41.126980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:44.010 [2024-11-20 13:54:41.291802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.010 [2024-11-20 13:54:41.291999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.010 [2024-11-20 13:54:41.292140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.010 [2024-11-20 13:54:41.292187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.947 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.947 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:33:44.947 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:33:44.947 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.947 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:45.206 nvme0n1 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eVCha.txt 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:45.206 true 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732110882 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65798 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:33:45.206 13:54:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:33:47.119 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:47.119 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.119 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:47.119 [2024-11-20 13:54:44.335921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:33:47.119 [2024-11-20 13:54:44.336472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:47.119 [2024-11-20 13:54:44.336528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:33:47.119 [2024-11-20 13:54:44.336549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.119 [2024-11-20 13:54:44.338638] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:33:47.119 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65798 00:33:47.119 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65798 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65798 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eVCha.txt 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:33:47.120 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eVCha.txt 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65770 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65770 ']' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65770 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65770 00:33:47.379 killing process with pid 65770 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65770' 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65770 00:33:47.379 13:54:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65770 00:33:49.913 13:54:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:33:49.913 13:54:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:33:49.913 ************************************ 00:33:49.913 END TEST bdev_nvme_reset_stuck_adm_cmd 00:33:49.913 ************************************ 00:33:49.913 00:33:49.913 real 0m6.673s 
00:33:49.913 user 0m23.381s 00:33:49.913 sys 0m0.827s 00:33:49.913 13:54:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.913 13:54:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:33:49.913 13:54:47 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:33:49.913 13:54:47 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:33:49.913 13:54:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.913 13:54:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.913 13:54:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:33:49.913 ************************************ 00:33:49.913 START TEST nvme_fio 00:33:49.913 ************************************ 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:33:49.913 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:33:49.913 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:33:49.913 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:49.913 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:50.173 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:33:50.173 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:33:50.173 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:33:50.173 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:33:50.173 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:33:50.173 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:50.173 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:33:50.431 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:50.431 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:33:50.690 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:33:50.690 13:54:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:50.690 13:54:47 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.690 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:50.691 13:54:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:33:50.967 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:50.967 fio-3.35 00:33:50.967 Starting 1 thread 00:33:54.261 00:33:54.261 test: (groupid=0, jobs=1): err= 0: pid=65953: Wed Nov 20 13:54:51 2024 00:33:54.261 read: IOPS=19.2k, BW=74.9MiB/s (78.6MB/s)(150MiB/2001msec) 00:33:54.261 slat (nsec): min=4414, max=86781, avg=5601.95, stdev=2643.06 00:33:54.261 clat (usec): min=279, max=8936, avg=3316.45, stdev=717.62 00:33:54.261 lat (usec): min=284, max=9023, avg=3322.05, stdev=719.65 00:33:54.261 clat percentiles (usec): 00:33:54.261 | 1.00th=[ 2278], 5.00th=[ 2835], 10.00th=[ 2966], 20.00th=[ 3032], 00:33:54.261 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:33:54.261 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3884], 95.00th=[ 4146], 00:33:54.261 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8455], 00:33:54.261 | 99.99th=[ 8717] 00:33:54.261 bw ( KiB/s): min=71944, max=82088, per=100.00%, avg=77749.33, stdev=5228.62, samples=3 00:33:54.261 iops : min=17986, max=20522, avg=19437.33, stdev=1307.16, samples=3 00:33:54.261 write: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2001msec); 0 zone resets 00:33:54.261 slat (nsec): min=4514, max=61279, avg=5799.50, stdev=2685.62 00:33:54.261 clat (usec): min=221, max=8760, avg=3327.55, stdev=728.75 00:33:54.261 lat (usec): min=226, max=8781, avg=3333.35, stdev=730.81 00:33:54.261 clat percentiles (usec): 00:33:54.261 | 1.00th=[ 2245], 5.00th=[ 2835], 10.00th=[ 2966], 20.00th=[ 3032], 00:33:54.261 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:33:54.261 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3916], 95.00th=[ 4178], 00:33:54.261 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8455], 00:33:54.261 | 99.99th=[ 8586] 00:33:54.261 bw ( KiB/s): min=71840, max=82264, per=100.00%, avg=77925.33, stdev=5427.07, samples=3 00:33:54.261 iops : min=17960, max=20566, avg=19481.33, stdev=1356.77, samples=3 00:33:54.261 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:33:54.261 lat (msec) : 2=0.51%, 4=91.12%, 10=8.32% 00:33:54.261 cpu : usr=99.20%, sys=0.15%, ctx=6, majf=0, 
minf=607 00:33:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.261 issued rwts: total=38386,38352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.261 00:33:54.261 Run status group 0 (all jobs): 00:33:54.261 READ: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=150MiB (157MB), run=2001-2001msec 00:33:54.261 WRITE: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:33:54.520 ----------------------------------------------------- 00:33:54.520 Suppressions used: 00:33:54.520 count bytes template 00:33:54.520 1 32 /usr/src/fio/parse.c 00:33:54.520 1 8 libtcmalloc_minimal.so 00:33:54.520 ----------------------------------------------------- 00:33:54.520 00:33:54.520 13:54:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:33:54.520 13:54:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:33:54.520 13:54:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:33:54.520 13:54:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:33:54.780 13:54:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:33:54.780 13:54:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:33:55.040 13:54:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:33:55.040 13:54:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:33:55.040 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:55.300 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:55.300 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:55.300 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:33:55.300 13:54:52 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:55.300 13:54:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:33:55.300 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:55.300 fio-3.35 00:33:55.300 Starting 1 thread 00:33:59.492 00:33:59.492 test: (groupid=0, jobs=1): err= 0: pid=66019: Wed Nov 20 13:54:55 2024 00:33:59.492 read: IOPS=17.7k, BW=69.1MiB/s (72.5MB/s)(138MiB/2001msec) 00:33:59.492 slat (nsec): min=4135, max=58537, avg=5692.26, stdev=1554.21 00:33:59.492 clat (usec): min=263, max=10071, avg=3598.71, stdev=532.64 00:33:59.492 lat (usec): min=268, max=10130, avg=3604.40, stdev=533.33 00:33:59.492 clat percentiles (usec): 00:33:59.492 | 1.00th=[ 2507], 5.00th=[ 3097], 10.00th=[ 3163], 20.00th=[ 3261], 00:33:59.492 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3589], 00:33:59.492 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4080], 95.00th=[ 4178], 00:33:59.492 | 99.00th=[ 5342], 99.50th=[ 6718], 99.90th=[ 8225], 99.95th=[ 8848], 00:33:59.492 | 99.99th=[10028] 00:33:59.492 bw ( KiB/s): min=63376, max=75848, per=98.24%, avg=69532.33, stdev=6237.53, samples=3 00:33:59.492 iops : min=15844, max=18962, avg=17383.00, stdev=1559.38, samples=3 00:33:59.492 write: IOPS=17.7k, BW=69.1MiB/s (72.5MB/s)(138MiB/2001msec); 0 zone resets 00:33:59.492 slat (nsec): min=4207, max=43409, avg=5896.61, stdev=1481.27 00:33:59.492 clat (usec): min=321, max=9998, avg=3604.24, stdev=526.34 00:33:59.492 lat (usec): min=327, max=10018, avg=3610.14, stdev=526.99 00:33:59.492 clat percentiles (usec): 00:33:59.492 | 1.00th=[ 2540], 5.00th=[ 3097], 10.00th=[ 3163], 20.00th=[ 3261], 00:33:59.492 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3589], 00:33:59.492 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4080], 95.00th=[ 4178], 00:33:59.492 | 99.00th=[ 5342], 99.50th=[ 6718], 99.90th=[ 8225], 99.95th=[ 8848], 00:33:59.492 | 99.99th=[ 9896] 00:33:59.492 bw ( KiB/s): min=63120, max=76096, per=98.25%, avg=69513.67, stdev=6490.06, samples=3 00:33:59.492 iops : min=15780, max=19024, avg=17378.33, stdev=1622.52, samples=3 00:33:59.492 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:59.492 lat (msec) : 2=0.27%, 4=81.57%, 10=18.12%, 20=0.01% 00:33:59.492 cpu : usr=99.20%, sys=0.15%, ctx=4, majf=0, minf=608 00:33:59.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:59.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.492 issued rwts: total=35405,35395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.492 00:33:59.492 Run status group 0 (all jobs): 00:33:59.492 READ: bw=69.1MiB/s (72.5MB/s), 69.1MiB/s-69.1MiB/s (72.5MB/s-72.5MB/s), io=138MiB (145MB), run=2001-2001msec 00:33:59.492 WRITE: bw=69.1MiB/s (72.5MB/s), 69.1MiB/s-69.1MiB/s (72.5MB/s-72.5MB/s), io=138MiB (145MB), run=2001-2001msec 00:33:59.492 ----------------------------------------------------- 00:33:59.492 Suppressions used: 00:33:59.492 count bytes template 00:33:59.492 1 32 /usr/src/fio/parse.c 00:33:59.492 1 8 libtcmalloc_minimal.so 00:33:59.492 ----------------------------------------------------- 00:33:59.492 00:33:59.492 
13:54:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:33:59.492 13:54:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:59.492 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:59.752 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:59.752 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:59.752 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:33:59.752 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:59.752 13:54:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:33:59.752 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:59.752 fio-3.35 00:33:59.752 Starting 1 thread 00:34:03.067 00:34:03.067 test: (groupid=0, jobs=1): err= 0: pid=66085: Wed Nov 20 13:55:00 2024 00:34:03.067 read: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(118MiB/2001msec) 00:34:03.067 slat (usec): min=4, max=101, avg= 6.84, stdev= 2.55 00:34:03.068 clat (usec): min=471, max=10602, avg=4208.91, stdev=990.52 00:34:03.068 lat (usec): min=478, max=10608, avg=4215.75, stdev=991.74 00:34:03.068 clat percentiles (usec): 00:34:03.068 | 1.00th=[ 2311], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3359], 00:34:03.068 | 30.00th=[ 3752], 40.00th=[ 
4015], 50.00th=[ 4146], 60.00th=[ 4228], 00:34:03.068 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5276], 95.00th=[ 6194], 00:34:03.068 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9372], 99.95th=[10028], 00:34:03.068 | 99.99th=[10552] 00:34:03.068 bw ( KiB/s): min=56488, max=67448, per=100.00%, avg=61909.33, stdev=5480.94, samples=3 00:34:03.068 iops : min=14122, max=16862, avg=15477.33, stdev=1370.24, samples=3 00:34:03.068 write: IOPS=15.1k, BW=59.1MiB/s (62.0MB/s)(118MiB/2001msec); 0 zone resets 00:34:03.068 slat (nsec): min=4488, max=55464, avg=7108.73, stdev=2631.79 00:34:03.068 clat (usec): min=388, max=10616, avg=4221.52, stdev=992.46 00:34:03.068 lat (usec): min=397, max=10623, avg=4228.63, stdev=993.71 00:34:03.068 clat percentiles (usec): 00:34:03.068 | 1.00th=[ 2343], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3359], 00:34:03.068 | 30.00th=[ 3785], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:34:03.068 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5342], 95.00th=[ 6194], 00:34:03.068 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[ 9896], 00:34:03.068 | 99.99th=[10552] 00:34:03.068 bw ( KiB/s): min=55560, max=67592, per=100.00%, avg=61501.33, stdev=6017.39, samples=3 00:34:03.068 iops : min=13890, max=16898, avg=15375.33, stdev=1504.35, samples=3 00:34:03.068 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:34:03.068 lat (msec) : 2=0.56%, 4=37.09%, 10=62.26%, 20=0.05% 00:34:03.068 cpu : usr=98.80%, sys=0.05%, ctx=3, majf=0, minf=607 00:34:03.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:03.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:03.068 issued rwts: total=30236,30265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:03.068 00:34:03.068 Run status group 0 (all jobs): 00:34:03.068 READ: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=118MiB (124MB), run=2001-2001msec 00:34:03.068 WRITE: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=118MiB (124MB), run=2001-2001msec 00:34:03.327 ----------------------------------------------------- 00:34:03.327 Suppressions used: 00:34:03.327 count bytes template 00:34:03.327 1 32 /usr/src/fio/parse.c 00:34:03.327 1 8 libtcmalloc_minimal.so 00:34:03.327 ----------------------------------------------------- 00:34:03.327 00:34:03.327 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:34:03.327 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:34:03.327 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:34:03.327 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:34:03.585 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:34:03.585 13:55:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:34:03.844 13:55:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:34:03.844 13:55:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:03.844 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:34:04.103 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:04.103 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:04.103 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:34:04.103 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:34:04.103 13:55:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:34:04.103 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:04.103 fio-3.35 00:34:04.103 Starting 1 thread 00:34:08.292 00:34:08.292 test: (groupid=0, jobs=1): err= 0: pid=66146: Wed Nov 20 13:55:05 2024 00:34:08.292 read: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(115MiB/2001msec) 00:34:08.292 slat (usec): min=4, max=385, avg= 6.82, stdev= 3.27 00:34:08.292 clat (usec): min=243, max=10342, avg=4307.87, stdev=726.14 00:34:08.292 lat (usec): min=248, max=10373, avg=4314.69, stdev=726.82 00:34:08.292 clat percentiles (usec): 00:34:08.292 | 1.00th=[ 2868], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 4015], 00:34:08.292 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:08.292 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5800], 00:34:08.292 | 99.00th=[ 7308], 99.50th=[ 7767], 99.90th=[ 8979], 99.95th=[ 9503], 00:34:08.292 | 99.99th=[10290] 00:34:08.292 bw ( KiB/s): min=56096, max=59848, per=98.43%, avg=58069.33, stdev=1883.56, samples=3 00:34:08.292 iops : min=14024, max=14962, avg=14517.33, stdev=470.89, samples=3 00:34:08.292 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec); 0 zone resets 00:34:08.292 slat (usec): min=4, max=955, avg= 7.16, stdev= 6.66 00:34:08.292 clat (usec): min=281, max=10210, avg=4331.57, stdev=719.75 00:34:08.292 lat (usec): min=286, max=10225, avg=4338.73, stdev=720.45 00:34:08.292 clat percentiles (usec): 00:34:08.292 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3785], 20.00th=[ 4015], 00:34:08.292 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:08.292 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5866], 00:34:08.292 | 99.00th=[ 
7308], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[ 9503], 00:34:08.292 | 99.99th=[10028] 00:34:08.292 bw ( KiB/s): min=56376, max=59032, per=98.16%, avg=57968.00, stdev=1404.52, samples=3 00:34:08.292 iops : min=14094, max=14758, avg=14492.00, stdev=351.13, samples=3 00:34:08.292 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:34:08.292 lat (msec) : 2=0.17%, 4=19.23%, 10=80.53%, 20=0.02% 00:34:08.292 cpu : usr=98.20%, sys=0.30%, ctx=30, majf=0, minf=605 00:34:08.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:08.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.292 issued rwts: total=29511,29542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:08.292 00:34:08.292 Run status group 0 (all jobs): 00:34:08.292 READ: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=115MiB (121MB), run=2001-2001msec 00:34:08.292 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:34:08.552 ----------------------------------------------------- 00:34:08.552 Suppressions used: 00:34:08.552 count bytes template 00:34:08.552 1 32 /usr/src/fio/parse.c 00:34:08.552 1 8 libtcmalloc_minimal.so 00:34:08.552 ----------------------------------------------------- 00:34:08.552 00:34:08.552 13:55:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:34:08.552 13:55:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:34:08.552 00:34:08.552 real 0m18.454s 00:34:08.552 user 0m14.615s 00:34:08.552 sys 0m2.387s 00:34:08.552 ************************************ 00:34:08.552 END TEST nvme_fio 00:34:08.552 ************************************ 00:34:08.552 13:55:05 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.552 13:55:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:34:08.552 ************************************ 00:34:08.552 END TEST nvme 00:34:08.552 ************************************ 00:34:08.552 00:34:08.552 real 1m34.747s 00:34:08.552 user 3m46.511s 00:34:08.552 sys 0m22.073s 00:34:08.552 13:55:05 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.552 13:55:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.552 13:55:05 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:34:08.552 13:55:05 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:08.552 13:55:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:08.552 13:55:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.552 13:55:05 -- common/autotest_common.sh@10 -- # set +x 00:34:08.552 ************************************ 00:34:08.552 START TEST nvme_scc 00:34:08.552 ************************************ 00:34:08.552 13:55:05 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:08.552 * Looking for test storage... 
00:34:08.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:08.812 13:55:05 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.812 13:55:05 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.812 13:55:05 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.812 13:55:05 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.812 13:55:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@368 -- # return 0 00:34:08.812 13:55:06 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.812 13:55:06 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.812 --rc genhtml_branch_coverage=1 00:34:08.812 --rc genhtml_function_coverage=1 00:34:08.812 --rc genhtml_legend=1 00:34:08.812 --rc geninfo_all_blocks=1 00:34:08.812 --rc geninfo_unexecuted_blocks=1 00:34:08.812 00:34:08.812 ' 00:34:08.812 13:55:06 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.812 --rc genhtml_branch_coverage=1 00:34:08.812 --rc genhtml_function_coverage=1 00:34:08.812 --rc genhtml_legend=1 00:34:08.812 --rc geninfo_all_blocks=1 00:34:08.812 --rc geninfo_unexecuted_blocks=1 00:34:08.812 00:34:08.812 ' 00:34:08.812 13:55:06 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:34:08.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.812 --rc genhtml_branch_coverage=1 00:34:08.812 --rc genhtml_function_coverage=1 00:34:08.812 --rc genhtml_legend=1 00:34:08.812 --rc geninfo_all_blocks=1 00:34:08.812 --rc geninfo_unexecuted_blocks=1 00:34:08.812 00:34:08.812 ' 00:34:08.812 13:55:06 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.812 --rc genhtml_branch_coverage=1 00:34:08.812 --rc genhtml_function_coverage=1 00:34:08.812 --rc genhtml_legend=1 00:34:08.812 --rc geninfo_all_blocks=1 00:34:08.812 --rc geninfo_unexecuted_blocks=1 00:34:08.812 00:34:08.812 ' 00:34:08.812 13:55:06 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.812 13:55:06 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.812 13:55:06 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.812 13:55:06 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.812 13:55:06 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.812 13:55:06 nvme_scc -- paths/export.sh@5 -- # export PATH 00:34:08.812 13:55:06 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
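Note on the cmp_versions walk that opened this nvme_scc prologue (the same dance ran earlier in the nvme suite): both version strings are split on `.` and `-`, the loop runs over max(ver1_l, ver2_l) components, each component is coerced to a decimal, and the arrays are compared position by position. Since `lt 1.15 2` returned 0 here (the installed lcov is pre-2.0), the legacy `--rc lcov_branch_coverage=1` option spellings were exported as LCOV_OPTS. A minimal standalone sketch of that comparison (the function name and structure are illustrative, not the exact scripts/common.sh helpers):

    #!/usr/bin/env bash
    # Sketch of the cmp_versions logic traced above.
    # Returns 0 when $1 < $2, 1 otherwise (so "equal" is not "less than").
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$2"    # "2"    -> (2)
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components count as 0
            [[ $a =~ ^[0-9]+$ ]] || a=0        # non-numeric -> 0, like decimal()
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1
    }

    version_lt 1.15 2 && echo "pre-2.0 lcov: use the lcov_* --rc option names"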
00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:34:08.812 13:55:06 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:34:08.812 13:55:06 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:08.812 13:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:34:08.812 13:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:34:08.812 13:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:34:08.812 13:55:06 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:09.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:09.381 Waiting for block devices as requested 00:34:09.639 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:09.639 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:09.897 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:09.897 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:15.175 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:15.175 13:55:12 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:34:15.175 13:55:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:34:15.175 13:55:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:34:15.175 13:55:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:15.175 13:55:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
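From `scan_nvme_ctrls` onward, the log is nvme/functions.sh fingerprinting each controller: nvme_get pipes `id-ctrl` output through a `while IFS=: read -r reg val` loop and evals every pair into a dynamically named associative array, which is why the trace repeats eval 'nvme0[...]=...' for field after field below. The same parse as a self-contained sketch, with a fixed array name in place of the eval indirection and `nvme` taken from PATH rather than the pinned /usr/local/src/nvme-cli build used in the log:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop traced above: one id-ctrl field
    # per line, "name : value", split on ':' into an associative array.
    declare -A nvme0

    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip headers/blank lines
        reg=${reg//[[:space:]]/}               # field names carry padding
        val=${val#" "}                         # drop the space after ':'
        nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${nvme0[vid]} sn=${nvme0[sn]} mdts=${nvme0[mdts]}"

Trailing padding inside values is kept on purpose; the trace stores sn as '12341 ' and mn as 'QEMU NVMe Ctrl ' verbatim.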
00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.175 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
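Of the id-ctrl fields captured so far, mdts=7 is the one that actually constrains I/O sizing: per the NVMe spec it is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN). A quick decode, assuming the common 4 KiB minimum page, since this log never prints the CAP register:

    mdts=7          # nvme0[mdts]=7 from the dump above
    mpsmin=4096     # assumed CAP.MPSMIN page size; not shown in this log
    echo $(( (1 << mdts) * mpsmin ))   # 524288 -> at most 512 KiB of data per command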
00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:34:15.176 13:55:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.176 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:15.177 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.177 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:34:15.178 
13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
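The id-ns pass for ng0n1 already gives everything needed to size the namespace: nsze=0x140000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf4 row a little further on reports as lbads:12 "(in use)", i.e. 4096-byte blocks. The arithmetic:

    nsze=$((0x140000))         # 1310720 logical blocks
    block=$((1 << 12))         # lbads:12 -> 4096-byte blocks
    echo $(( nsze * block ))   # 5368709120 bytes = exactly 5 GiB per QEMU namespace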
00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.178 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:34:15.179 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:34:15.179 13:55:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.179 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:34:15.180 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.180 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:34:15.181 13:55:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:34:15.181 13:55:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:34:15.181 13:55:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:15.181 13:55:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:34:15.181 13:55:12 
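
For orientation: the trace above is the tail of the controller-discovery loop in nvme/functions.sh. Having finished registering nvme0 (bound to PCI 0000:00:11.0), the loop moves on to /sys/class/nvme/nvme1, checks its PCI address 0000:00:10.0 against the allow/block lists via pci_can_use, and only then names it ctrl_dev=nvme1. A rough sketch of that loop shape, reconstructed from the functions.sh@47-63 lines in the trace rather than copied from the script (the readlink derivation of the PCI address is an assumption; the trace only shows the resulting value):

    # Sketch of the per-controller discovery loop (reconstructed from the trace).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { true; }   # stand-in; the real check consults allow/block lists

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                    # functions.sh@50 in the trace
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill the assoc. array
        ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # functions.sh@61
        bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # functions.sh@63
    done
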
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.181 
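
Each block of [[ -n ... ]] / eval pairs above is one iteration of nvme_get: it runs the bundled nvme-cli binary, splits every output line of the form "reg : value" on the first colon, and evals the pair into a global associative array (nvme1 here, which is why nvme1[sn]='12340 ' keeps its trailing padding). An approximate shape of the mechanism, with names taken from the functions.sh@16-23 trace lines; treat the trimming details as assumptions, not the script verbatim:

    # Approximate nvme_get, per the functions.sh@16-23 trace lines.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # global assoc. array, e.g. nvme1=()
        while IFS=: read -r reg val; do         # val keeps any further colons
            [[ -n $val ]] || continue           # the [[ -n ... ]] checks in the trace
            reg=${reg//[[:space:]]/}            # assumption: key whitespace stripped
            val=${val# }                        # assumption: one leading space dropped
            eval "${ref}[\$reg]=\$val"          # e.g. nvme1[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Because read assigns everything after the first colon to val, multi-colon values such as ps0's "mp:25.00W operational enlat:16 ..." survive intact, which matches what the trace stores.
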
13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.181 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:34:15.182 
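
Two of the fields parsed just above are worth decoding. ver=0x10400 packs the NVMe version as major.minor.tertiary in 16/8/8 bits, i.e. NVMe 1.4.0, and mdts=7 is the Maximum Data Transfer Size as a power of two in units of the controller's minimum memory page size, so 2^7 * 4 KiB = 512 KiB per command (the 4 KiB page size is an assumption based on the usual CAP.MPSMIN=0 for QEMU; it is not in this trace):

    ver=0x10400 mdts=7 mpsmin_bytes=4096   # mpsmin assumed, not shown in the trace
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(( (ver >> 8) & 0xff )) $((ver & 0xff))
    echo "max transfer per command: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"
    # -> NVMe 1.4.0; max transfer per command: 512 KiB
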
13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:34:15.182 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- 
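
The thermal thresholds parsed a few entries up are in kelvins, as the spec defines them: wctemp=343 is the warning threshold and cctemp=373 the critical one. Using the usual integer conversion:

    wctemp=343 cctemp=373
    echo "warning at $((wctemp - 273)) C, critical at $((cctemp - 273)) C"
    # -> warning at 70 C, critical at 100 C
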
nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
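
sqes=0x66 and cqes=0x44, parsed just above, pack the maximum (high nibble) and required (low nibble) queue entry sizes as log2 of bytes: 0x66 means 64-byte submission queue entries only, 0x44 means 16-byte completion queue entries, the standard fixed sizes. nn=256 is simply the number of namespaces the controller supports. Decoding the nibbles:

    sqes=0x66 cqes=0x44
    echo "SQE size: min $((1 << (sqes & 0xf))) / max $((1 << (sqes >> 4))) bytes"
    echo "CQE size: min $((1 << (cqes & 0xf))) / max $((1 << (cqes >> 4))) bytes"
    # -> SQE size: min 64 / max 64 bytes; CQE size: min 16 / max 16 bytes
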
00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.446 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.447 13:55:12 nvme_scc -- 
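
oncs=0x15d at the start of this stretch is the Optional NVM Command Support bitmask; per the base spec bit layout, 0x15d (bits 0, 2, 3, 4, 6, 8) advertises Compare, Dataset Management, Write Zeroes, the Save/Select field in Set/Get Features, Timestamp, and Copy, while Write Uncorrectable, Reservations, and Verify are absent. vwc=0x7 reports a volatile write cache (bit 0) with broadcast-NSID Flush supported (bits 2:1 = 11b). A small decode, with bit names per the spec rather than anything in this log:

    oncs=0x15d
    names=(compare write_uncor dsm write_zeroes save_select resv timestamp verify copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "ONCS bit $i: ${names[i]}"
    done
    # -> bits 0, 2, 3, 4, 6, 8: compare dsm write_zeroes save_select timestamp copy
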
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.447 13:55:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.447 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
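
The geometry fields just parsed for ng1n1 pin down its usable size: nsze (and here ncap and nuse too, since the namespace is fully allocated) is a block count, and flbas=0x7 selects LBA format 7, which the lbaf7 entry further down reports as lbads:12, i.e. 2^12 = 4096-byte data blocks with 64 bytes of separate metadata (flbas bit 4 is clear, so the metadata is not interleaved). Working that through:

    nsze=0x17a17a lbads=12
    echo "$((nsze)) blocks x $((1 << lbads)) B = $((nsze * (1 << lbads))) bytes"
    # -> 1548666 blocks x 4096 B = 6343335936 bytes (about 5.9 GiB)
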
00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:34:15.448 13:55:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.448 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 
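
Note that the namespace loop visits each namespace twice: once through the generic character device (ng1n1, the /dev/ng* nodes added in Linux 5.13) and once through the block device (nvme1n1), because the glob in functions.sh@54 matches both the ng<id> and <ctrl>n name patterns. Both passes run nvme_get, and both write into the controller's namespace map through the _ctrl_ns nameref at the same index, so the nvme1n1 entry simply overwrites the ng1n1 one. A sketch of that bookkeeping, reconstructed from the @53-58 trace lines (nvme_get as sketched earlier; not the script verbatim):

    shopt -s extglob                       # required for the @( | ) pattern below
    ctrl=/sys/class/nvme/nvme1 ctrl_dev=nvme1
    declare -A nvme1_ns
    declare -n _ctrl_ns=${ctrl_dev}_ns     # nameref, as in functions.sh@53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue           # functions.sh@55
        ns_dev=${ns##*/}                   # ng1n1 on the first pass, nvme1n1 on the second
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev        # same index "1" both times; block dev wins
    done
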
13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:34:15.449 
13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.449 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:34:15.450 13:55:12 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:34:15.450 13:55:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:34:15.451 13:55:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:34:15.451 13:55:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:34:15.451 13:55:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:15.451 13:55:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.451 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:34:15.452 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:34:15.452 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.452 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:34:15.453 
13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:34:15.453 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.454 
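With the nvme2 id-ctrl fields captured, functions.sh@47-63 folds each admitted controller into the global bookkeeping maps (`ctrls`, `nvmes`, `bdfs`, `ordered_ctrls`) and binds a `local -n` nameref so namespace data lands in a per-controller array (`nvme2_ns`). A condensed reconstruction of that enumeration, assuming `PCI_BLOCKED`/`PCI_ALLOWED` are the variables behind the `pci_can_use` checks at scripts/common.sh@18-27, with the `nvme_get` calls omitted and the trace's `@(ng…|…n)*` extglob simplified:

```bash
# Hedged reconstruction of the controller enumeration traced above;
# array names mirror the log, PCI_BLOCKED/PCI_ALLOWED are assumed.
pci_can_use() {                              # scripts/common.sh@18-27 logic
    local i
    for i in $PCI_BLOCKED; do
        [[ $i == "$1" ]] && return 1         # explicitly blocked
    done
    [[ -z $PCI_ALLOWED ]] && return 0        # empty allowlist: allow all
    for i in $PCI_ALLOWED; do
        [[ $i == "$1" ]] && return 0
    done
    return 1
}

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                            # "nvme2"
    pci=$(basename "$(readlink -f "$ctrl/device")") # "0000:00:12.0"
    pci_can_use "$pci" || continue
    declare -gA "${ctrl_dev}_ns=()"
    declare -n _ctrl_ns=${ctrl_dev}_ns              # nameref, functions.sh@53
    for ns in "$ctrl/${ctrl_dev}n"*; do             # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}               # _ctrl_ns[1]=nvme2n1
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60-63
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    unset -n _ctrl_ns
done
```

The indirection matters because bash has no nested arrays: `nvmes[nvme2]` stores only the name `nvme2_ns`, and later consumers re-bind it with their own nameref exactly as functions.sh@53 does here.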
13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.454 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:34:15.718 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.719 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:34:15.720 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 
13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.720 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.721 13:55:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.721 13:55:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:34:15.721 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:15.722 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.723 13:55:12 nvme_scc -- 
00:34:15.723 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 eui64=0000000000000000
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:34:15.724 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
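Every eval line in the trace above is one turn of the same small loop: nvme_get runs nvme-cli, splits each "field : value" output line at the first colon with IFS=: read, and assigns the pair into a bash associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's one-pair-per-line output; the real function targets a dynamically named array through eval, whereas this sketch uses a fixed array name:

# Parse `nvme id-ns` output into an associative array (illustrative sketch).
declare -A ns
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4"
    val=${val# }                    # drop the single leading space
    [[ -n $val ]] && ns[$reg]=$val  # skip lines that carry no value
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
printf 'nsze=%s flbas=%s lbaf4=%s\n' "${ns[nsze]}" "${ns[flbas]}" "${ns[lbaf4]}"

Splitting only at the first colon matters here: the lbaf values themselves contain colons (ms:0 lbads:12 ...), and this is what keeps them intact in the array.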
00:34:15.725 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:34:15.725 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:34:15.725 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:34:15.725 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:34:15.725 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 eui64=0000000000000000
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
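All namespaces on this controller report identical geometry: flbas=0x4 selects LBA format 4 (ms:0 lbads:12), so the in-use block size is 2^12 = 4096 bytes with no per-block metadata, and nsze=0x100000 such blocks makes a 4 GiB namespace. A quick check of that arithmetic, with the field values copied from the trace (the low four bits of FLBAS select the in-use format):

# Derive block size and namespace size from the id-ns fields above.
flbas=0x4; nsze=0x100000; lbads=12           # lbads taken from lbaf4
fmt=$(( flbas & 0xF ))                       # in-use LBA format index -> 4
bs=$(( 1 << lbads ))                         # 2^12 = 4096-byte blocks
echo "lbaf$fmt: ${bs}B blocks, $(( nsze * bs / 1024**3 )) GiB namespace"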
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:34:15.726 13:55:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:34:15.726 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:34:15.726 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:34:15.726 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:34:15.727 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:34:15.727 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:34:15.727 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 eui64=0000000000000000
00:34:15.727 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls[nvme2]=nvme2
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes[nvme2]=nvme2_ns
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs[nvme2]=0000:00:12.0
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[2]=nvme2
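That completes controller nvme2: ctrls maps the device name to itself, nvmes stores the name of the per-controller namespace array (the string nvme2_ns, populated through the _ctrl_ns assignments above), and bdfs records its PCI address. A sketch of how later test code can walk that bookkeeping, assuming those arrays are in scope and nvme2_ns is an associative array keyed by namespace number; the variable name ns_map and the loop body are illustrative:

# Look up one controller's PCI address and namespaces via the arrays above.
ctrl=nvme2
echo "ctrl=$ctrl bdf=${bdfs[$ctrl]}"            # -> 0000:00:12.0
declare -n ns_map=${nvmes[$ctrl]}               # nameref onto the nvme2_ns array
for nsid in "${!ns_map[@]}"; do
    echo "  ns$nsid -> ${ns_map[$nsid]}"        # -> nvme2n1, nvme2n2, nvme2n3
done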
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:34:15.990 13:55:13 nvme_scc -- scripts/common.sh@27 -- # return 0
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:34:15.990 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:34:15.991 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
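Of the id-ctrl fields above, mdts=7 is the one that directly bounds I/O sizing: MDTS is a power-of-two multiplier on the controller's minimum memory page size (CAP.MPSMIN, which lives in the CAP register rather than in this identify output and is assumed here to be the usual 4 KiB), so the largest single transfer this QEMU controller accepts works out to 2^7 x 4 KiB = 512 KiB:

# Maximum data transfer size implied by mdts (MPSMIN assumed to be 4 KiB).
mdts=7; mpsmin_bytes=4096
max_xfer=$(( (1 << mdts) * mpsmin_bytes ))
echo "max transfer: $(( max_xfer / 1024 )) KiB"   # -> 512 KiB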
00:34:15.991 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:34:15.991 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0
00:34:15.991 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:34:15.992 
13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.992 13:55:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:34:15.992 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:34:15.993 13:55:13 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
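
[Annotation] The xtrace above shows test/common/nvme/functions.sh caching `nvme id-ctrl` output into one bash associative array per controller — splitting each "reg : val" line on ':' via IFS and eval-ing it into e.g. nvme3[oncs]=0x15d — and then get_ctrls_with_feature selecting controllers whose ONCS bit 8 (Simple Copy Command support) is set; 0x15d & (1 << 8) is non-zero for all four controllers here. Below is a minimal sketch of that pattern with simplified helper names and a hypothetical two-register input; it is not the full functions.sh.

#!/usr/bin/env bash
# Sketch only: parse "reg : val" lines into an associative array,
# then gate on ONCS bit 8 the way ctrl_has_scc does in the trace above.
declare -A nvme3

nvme_get() {
  local ref=$1 reg val
  while IFS=: read -r reg val; do
    reg=${reg//[^a-zA-Z0-9]/}        # normalize the register name
    val=${val##* }                   # keep the last token of the value
    [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
  done
}

ctrl_has_scc() {
  local -n _ctrl=$1                  # bash 4.3+ nameref, as functions.sh uses
  (( ${_ctrl[oncs]:-0} & 1 << 8 ))   # ONCS bit 8 = Simple Copy Command
}

# Hypothetical id-ctrl excerpt, for illustration only:
nvme_get nvme3 <<'EOF'
oncs      : 0x15d
sqes      : 0x66
EOF

ctrl_has_scc nvme3 && echo "nvme3 supports SCC (oncs=${nvme3[oncs]})"
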
00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:34:15.993 13:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:34:15.993 13:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:34:15.993 13:55:13 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:16.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:17.131 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:17.131 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:34:17.389 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:17.389 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:34:17.389 13:55:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:34:17.389 13:55:14 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:17.389 13:55:14 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.389 13:55:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:34:17.389 ************************************ 00:34:17.389 START TEST nvme_simple_copy 00:34:17.389 ************************************ 00:34:17.389 13:55:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:34:17.956 Initializing NVMe Controllers 00:34:17.956 Attaching to 0000:00:10.0 00:34:17.956 Controller supports SCC. Attached to 0000:00:10.0 00:34:17.956 Namespace ID: 1 size: 6GB 00:34:17.956 Initialization complete. 
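
[Annotation] simple_copy attaches to the first SCC-capable controller returned above (nvme1 at 0000:00:10.0), writes LBAs 0-63 with random data, issues an NVMe Simple Copy to destination LBA 256, and verifies the 64 copied LBAs match — the report that follows. Below is a hedged equivalent against a kernel-attached namespace; the device path and the nvme-cli `nvme copy` flag spellings are assumptions to check against `nvme copy --help` — the SPDK test drives the PCIe BDF directly instead.

#!/usr/bin/env bash
# Sketch only — not the SPDK test binary. Assumes nvme-cli with the `copy`
# subcommand; the --sdlba/--slbs/--blocks spellings are assumptions.
set -euo pipefail

dev=/dev/nvme0n1   # hypothetical kernel device node
bs=4096            # namespace block size reported above

# Write LBAs 0-63 with random data.
dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct status=none

# One source range: 64 blocks starting at LBA 0, destination LBA 256.
# (The spec's NLB field is 0-based, hence 63 for 64 blocks.)
nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63

# Verify: the copied region must byte-match the source region.
cmp <(dd if="$dev" bs="$bs" skip=0   count=64 iflag=direct status=none) \
    <(dd if="$dev" bs="$bs" skip=256 count=64 iflag=direct status=none) \
  && echo "LBAs matching Written Data: 64"
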
00:34:17.956 00:34:17.956 Controller QEMU NVMe Ctrl (12340 ) 00:34:17.956 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:34:17.956 Namespace Block Size:4096 00:34:17.956 Writing LBAs 0 to 63 with Random Data 00:34:17.956 Copied LBAs from 0 - 63 to the Destination LBA 256 00:34:17.956 LBAs matching Written Data: 64 00:34:17.956 00:34:17.956 ************************************ 00:34:17.956 END TEST nvme_simple_copy 00:34:17.956 ************************************ 00:34:17.956 real 0m0.366s 00:34:17.956 user 0m0.151s 00:34:17.956 sys 0m0.112s 00:34:17.956 13:55:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.956 13:55:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:34:17.956 ************************************ 00:34:17.956 END TEST nvme_scc 00:34:17.956 ************************************ 00:34:17.956 00:34:17.956 real 0m9.320s 00:34:17.956 user 0m1.912s 00:34:17.956 sys 0m2.292s 00:34:17.956 13:55:15 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.956 13:55:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:34:17.956 13:55:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:34:17.956 13:55:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:34:17.956 13:55:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:34:17.956 13:55:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:34:17.956 13:55:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:34:17.956 13:55:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.956 13:55:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.956 13:55:15 -- common/autotest_common.sh@10 -- # set +x 00:34:17.956 ************************************ 00:34:17.956 START TEST nvme_fdp 00:34:17.956 ************************************ 00:34:17.956 13:55:15 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:34:17.956 * Looking for test storage... 00:34:17.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:17.956 13:55:15 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:17.956 13:55:15 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:17.956 13:55:15 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:18.216 13:55:15 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.216 13:55:15 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:34:18.217 13:55:15 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.217 13:55:15 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.217 --rc genhtml_branch_coverage=1 00:34:18.217 --rc genhtml_function_coverage=1 00:34:18.217 --rc genhtml_legend=1 00:34:18.217 --rc geninfo_all_blocks=1 00:34:18.217 --rc geninfo_unexecuted_blocks=1 00:34:18.217 00:34:18.217 ' 00:34:18.217 13:55:15 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.217 --rc genhtml_branch_coverage=1 00:34:18.217 --rc genhtml_function_coverage=1 00:34:18.217 --rc genhtml_legend=1 00:34:18.217 --rc geninfo_all_blocks=1 00:34:18.217 --rc geninfo_unexecuted_blocks=1 00:34:18.217 00:34:18.217 ' 00:34:18.217 13:55:15 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.217 --rc genhtml_branch_coverage=1 00:34:18.217 --rc genhtml_function_coverage=1 00:34:18.217 --rc genhtml_legend=1 00:34:18.217 --rc geninfo_all_blocks=1 00:34:18.217 --rc geninfo_unexecuted_blocks=1 00:34:18.217 00:34:18.217 ' 00:34:18.217 13:55:15 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.217 --rc genhtml_branch_coverage=1 00:34:18.217 --rc genhtml_function_coverage=1 00:34:18.217 --rc genhtml_legend=1 00:34:18.217 --rc geninfo_all_blocks=1 00:34:18.217 --rc geninfo_unexecuted_blocks=1 00:34:18.217 00:34:18.217 ' 00:34:18.217 13:55:15 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.217 13:55:15 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.217 13:55:15 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.217 13:55:15 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.217 13:55:15 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.217 13:55:15 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:34:18.217 13:55:15 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:34:18.217 13:55:15 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:34:18.217 13:55:15 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.217 13:55:15 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:18.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:18.784 Waiting for block devices as requested 00:34:18.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:19.043 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:19.043 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:19.302 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:24.587 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:24.587 13:55:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:34:24.587 13:55:21 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:34:24.587 13:55:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:34:24.587 13:55:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:34:24.587 13:55:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:24.587 13:55:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:34:24.587 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.587 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:24.588 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:34:24.588 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:34:24.589 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 
13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.589 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:34:24.589 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:34:24.590 13:55:21 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:34:24.590 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.590 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
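Every `IFS=:` / `read -r reg val` / `eval` triple above is one pass through the same loop in nvme/functions.sh's nvme_get: nvme-cli's "reg : val" output is split on the first colon and stored into a global associative array named after the device. A minimal reconstruction of that loop, pieced together from the @16-@23 frames in this trace (the whitespace trimming is an assumption, not copied from the script):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. ng0n1=(), a global assoc array
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip header lines with no "reg : val"
            reg=${reg//[[:space:]]/}           # "lbaf  0 " -> key "lbaf0"
            eval "${ref}[${reg}]=\"${val# }\"" # ng0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # e.g. id-ns /dev/ng0n1
    }

After `nvme_get ng0n1 id-ns /dev/ng0n1`, the fields read back as `${ng0n1[nsze]}` -> 0x140000.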
00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:34:24.591 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
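The eight lbaf entries just stored describe the namespace's supported LBA formats, and the earlier flbas=0x4 selects lbaf4 (`ms:0 lbads:12`, marked "in use"): 2^12 = 4096-byte blocks with no metadata. Combined with nsze=0x140000 that accounts for the whole QEMU namespace; a quick decode of those captured values (variable names illustrative):

    flbas=0x4                                   # bits 3:0 -> LBA format index 4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}  # -> 12
    nsze=0x140000                               # namespace size in logical blocks
    echo "block=$((1 << lbads))B total=$(( (nsze * (1 << lbads)) >> 30 ))GiB"
    # -> block=4096B total=5GiB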
00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.591 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:34:24.592 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.592 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.593 13:55:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:34:24.593 13:55:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:34:24.593 13:55:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:34:24.593 13:55:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:24.593 13:55:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:34:24.593 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:34:24.594 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
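Several id-ctrl registers captured in this trace are bit-packed rather than plain counts: sqes=0x66 and cqes=0x44 on nvme0 carry the minimum (bits 3:0) and maximum (bits 7:4) queue-entry sizes as powers of two, and nvme1's mdts=7 is a power-of-two multiple of the controller's minimum page size (4 KiB assumed below; CAP.MPSMIN itself is not shown in this log):

    sqes=0x66 cqes=0x44 mdts=7
    echo "SQE $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # 64-64
    echo "CQE $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # 16-16
    echo "max transfer $(( (1 << mdts) * 4 )) KiB"                  # 512 KiB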
00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.594 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:34:24.594 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
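[editor's note] Two of the values captured just above are easy to misread: wctemp=343 and cctemp=373 are in Kelvin, as the NVMe spec defines the temperature thresholds. Hand arithmetic, not repo code:

  echo $(( 343 - 273 ))   # wctemp: warning threshold,   70 C
  echo $(( 373 - 273 ))   # cctemp: critical threshold, 100 C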
00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.595 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:34:24.596 13:55:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.596 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
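[editor's note] For ng1n1 the interesting pair is flbas=0x7 with nlbaf=7: the low nibble of FLBAS is the index of the LBA format in use, which is why the lbaf7 entry further down in this dump carries the '(in use)' marker, and its lbads:12 means 2^12-byte logical blocks with ms:64 bytes of metadata each. A quick check of that decoding (hand calculation under those values, not repo code):

  flbas=0x7
  echo $(( flbas & 0xf ))   # -> 7, i.e. lbaf7 is the active format
  echo $(( 1 << 12 ))       # lbads:12 -> 4096-byte logical blocks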
00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:34:24.597 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
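[editor's note] Combining the records above: nsze, ncap and nuse were all parsed as 0x17a17a logical blocks, and with the 4 KiB format in use that puts this namespace at roughly 5.9 GiB. Hand arithmetic only:

  nsze=0x17a17a             # 1548666 blocks
  echo $(( nsze * 4096 ))   # 6343335936 bytes, ~5.9 GiB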
00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.597 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.598 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.598 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:34:24.598 13:55:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.598 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:34:24.599 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
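[editor's note] The reason the same id-ns dump appears twice for this controller is the glob at functions.sh@54: it matches both the generic char-device node (ng1n1) and the block-device node (nvme1n1) under the controller's sysfs directory, and nvme_get runs once per match. The pattern is bash extglob; a standalone rendering of the expansion seen in the trace:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme1
  # ${ctrl##*nvme} -> 1 and ${ctrl##*/} -> nvme1, so the pattern is @(ng1|nvme1n)*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "${ns##*/}"   # -> ng1n1, then nvme1n1
  done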
00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:34:24.599 13:55:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:34:24.599 13:55:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:34:24.599 13:55:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:24.599 13:55:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:34:24.599 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
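The functions.sh@16-23 records running through here are the nvme_get helper at work: it runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2, splits every output line on ":" into a reg/val pair, skips lines with no value, and evals the pair into a global associative array named after the device. A minimal standalone sketch of that loop, assuming nvme-cli's plain "field : value" text output (the whitespace trimming and the bare nvme invocation are illustrative, not the exact SPDK source):

    nvme_get() {
        # usage: nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "vid " -> "vid"
            [[ -n $val ]] || continue      # skip banner and blank lines
            eval "${ref}[$reg]=\"${val# }\""
        done < <(nvme "$@")                # assumes nvme-cli on PATH
    }

Afterwards a lookup such as ${nvme2[mdts]} or ${nvme2[ctratt]} returns the captured field, which is what the later FDP checks in this test consume.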
00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:34:24.600 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.600 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
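A quick decode of two values just stored: NVMe reports WCTEMP and CCTEMP in kelvin, so nvme2[wctemp]=343 and nvme2[cctemp]=373 are the QEMU controller's 70°C warning and 100°C critical composite-temperature thresholds (343 - 273 = 70, 373 - 273 = 100). With the array populated as above:

    echo "$(( ${nvme2[wctemp]} - 273 ))"   # prints 70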
00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:34:24.868 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.868 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
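A few records below, id-ctrl parsing for nvme2 ends and functions.sh@53-58 take over: a nameref (_ctrl_ns) is pointed at nvme2_ns, and an extglob pattern walks /sys/class/nvme/nvme2 for both the ng2nN character nodes and the nvme2nN block nodes, running nvme_get ... id-ns on each. Rewritten as a standalone sketch (reusing the nvme_get sketch above; the extglob/nullglob settings are assumptions, the glob itself is the one visible in the trace):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns              # same nameref trick as @53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                      # ng2n1, ng2n2, nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev       # keyed by namespace number
    done

Each namespace therefore gets its own associative array (ng2n1, ng2n2, ...) holding nsze, flbas, the lbaf0-7 format descriptors, and so on, exactly as the eval records that follow show.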
00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.869 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 
13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.870 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.870 13:55:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:34:24.871 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4 ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0 ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1
00:34:24.872 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0 ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0
00:34:24.872 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0 ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
00:34:24.873 13:55:21 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
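What the trace above records is nvme/functions.sh's nvme_get helper filling a global bash associative array (here ng2n2) from the "field : value" lines that nvme-cli's id-ns command prints: each output line is split at the first colon via IFS=: and read -r, the [[ -n ... ]] guard at functions.sh@22 skips empty fields, and eval stores the pair. A minimal sketch of that pattern, assuming stock `nvme id-ns` output and root privileges; parse_id_ns is a hypothetical name, not the actual helper in functions.sh:

#!/usr/bin/env bash
# Populate an associative array from `nvme id-ns` output, mirroring the
# IFS=: / read / eval loop traced above. parse_id_ns is a hypothetical
# stand-in for nvme_get in nvme/functions.sh.
parse_id_ns() {
    local dev=$1 ref=$2 reg val
    declare -gA "$ref"                        # global associative array, like `local -gA 'ng2n2=()'`
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "lbaf  4 " -> "lbaf4"
        val=${val#"${val%%[![:space:]]*}"}    # left-trim the value
        [[ -n $reg && -n $val ]] || continue  # same guard as functions.sh@22
        eval "${ref}[${reg}]=\"\$val\""       # e.g. ng2n2[nsze]=0x100000
    done < <(nvme id-ns "$dev")               # this CI box runs /usr/local/src/nvme-cli/nvme
}

parse_id_ns /dev/ng2n2 ng2n2
echo "nsze=${ng2n2[nsze]} flbas=${ng2n2[flbas]}"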
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 ng2n3[ncap]=0x100000 ng2n3[nuse]=0x100000 ng2n3[nsfeat]=0x14 ng2n3[nlbaf]=7 ng2n3[flbas]=0x4 ng2n3[mc]=0x3 ng2n3[dpc]=0x1f ng2n3[dps]=0 ng2n3[nmic]=0 ng2n3[rescap]=0 ng2n3[fpi]=0 ng2n3[dlfeat]=1
00:34:24.873 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 ng2n3[nawupf]=0 ng2n3[nacwu]=0 ng2n3[nabsn]=0 ng2n3[nabo]=0 ng2n3[nabspf]=0 ng2n3[noiob]=0 ng2n3[nvmcap]=0 ng2n3[npwg]=0 ng2n3[npwa]=0 ng2n3[npdg]=0 ng2n3[npda]=0 ng2n3[nows]=0
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 ng2n3[mcl]=128 ng2n3[msrc]=127 ng2n3[nulbaf]=0 ng2n3[anagrpid]=0 ng2n3[nsattr]=0 ng2n3[nvmsetid]=0 ng2n3[endgid]=0 ng2n3[nguid]=00000000000000000000000000000000 ng2n3[eui64]=0000000000000000
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
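Namespace discovery itself is the extglob pattern at functions.sh@54: for ctrl=/sys/class/nvme/nvme2, "ng${ctrl##*nvme}" expands to ng2 (the generic character-device nodes ng2n1..ng2n3) and "${ctrl##*/}n" to nvme2n (the block nodes nvme2n1..nvme2n3), so a single glob walks both families, which is why every namespace appears twice in this trace. A standalone sketch of how that glob expands, assuming controller nvme2 is present:

#!/usr/bin/env bash
# Expand the namespace glob from functions.sh@54 on its own.
shopt -s extglob nullglob    # extglob enables @(...), nullglob makes "no match" empty

ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2",
# so the pattern matches both ng2n* and nvme2n* entries.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"
done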
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:34:24.874 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1
00:34:24.875 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:34:24.875 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0 nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
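With the arrays filled in, the namespace geometry follows directly from the captured fields: bits 3:0 of flbas=0x4 select LBA format 4, whose descriptor "ms:0 lbads:12" means 2^12 = 4096-byte data blocks with no metadata, and nsze=0x100000 such blocks comes to 4 GiB per namespace. A worked decoding of those numbers; the array literal here is just a stand-in for what nvme_get collected above:

#!/usr/bin/env bash
# Decode the in-use LBA format from the captured id-ns fields.
declare -A ns=([flbas]=0x4 [nsze]=0x100000 [lbaf4]='ms:0 lbads:12 rp:0 ')

fmt=$(( ns[flbas] & 0xf ))                  # flbas bits 3:0 = LBA format index
lbaf=${ns[lbaf$fmt]}                        # -> 'ms:0 lbads:12 rp:0 '
lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # -> 12
bs=$(( 1 << lbads ))                        # block size: 4096 bytes
echo "lbaf$fmt: ${bs}B blocks, $(( ns[nsze] * bs / 1024**3 )) GiB namespace"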
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.876 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:34:24.877 13:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:34:24.877 13:55:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:34:24.878 13:55:22 
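The repeated IFS=: / read -r reg val / eval steps traced above all implement one pattern: nvme_get runs nvme-cli, splits each "reg : val" output line, and stores the pair in a global associative array named after the device. A minimal sketch of that pattern, simplified from nvme/functions.sh (the whitespace trimming here is illustrative, not the script's exact code):

# Populate a global associative array (e.g. nvme2n3) from nvme-cli output.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue           # skip lines with no value
        reg=${reg// /}                      # strip padding around the name
        eval "${ref}[${reg}]=\"${val# }\""  # e.g. nvme2n3[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# nvme_get nvme2n3 id-ns /dev/nvme2n3   # afterwards ${nvme2n3[nsze]} -> 0x100000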
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- 
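The dpc=0x1f and dps=0 values captured above describe end-to-end protection. Reading them against the NVMe Identify Namespace layout (background interpretation, not something the script itself checks): DPC bits 0-2 advertise PI types 1-3 and bits 3-4 allow PI as the first or last bytes of metadata, while dps=0 means no protection scheme is currently enabled:

dpc=0x1f
for bit in 0 1 2 3 4; do
    (( dpc & 1 << bit )) && echo "dpc bit $bit set"   # all five fire for 0x1f
done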
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:34:24.878 13:55:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:24.878 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:24.879 13:55:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:34:24.879 13:55:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:34:24.879 13:55:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:34:24.879 13:55:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:24.879 13:55:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- 
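Each lbafN string stores the metadata size (ms), the log2 of the data block size (lbads), and a relative performance hint (rp); flbas=0x4 selects the format marked "(in use)". A small decode helper (lbaf_block_size is hypothetical, for illustration only):

# "ms:0 lbads:12 rp:0 (in use)" -> 2^12 = 4096-byte blocks, no metadata.
lbaf_block_size() {
    local lbads=${1#*lbads:}
    lbads=${lbads%% *}
    echo $(( 1 << lbads ))
}
lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'   # -> 4096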
nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.879 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- 
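The ctratt=0x88010 recorded for nvme3 above is the value that the FDP probe at the end of this trace tests. Broken into bits (bit 19 is the one ctrl_has_fdp checks; the other two labels follow the NVMe base spec and are stated here as background, not asserted by the script):

# 0x88010 = (1 << 19) | (1 << 15) | (1 << 4)
#   bit 19 -> Flexible Data Placement (FDP) supported
#   bit 15 -> extended LBA formats (per spec)
#   bit  4 -> endurance groups (per spec)
echo $(( 0x88010 & 1 << 19 ))   # 524288, non-zero -> FDP-capable
echo $(( 0x8000  & 1 << 19 ))   # 0; the other controllers report ctratt=0x8000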
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 
13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.880 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
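The sqes=0x66 and cqes=0x44 values above pack queue-entry sizes as powers of two, the required size in the low nibble and the maximum in the high nibble; decoded for this controller:

sqes=0x66 cqes=0x44
echo $(( 1 << (sqes & 0xf) ))   # 64-byte submission queue entries (required)
echo $(( 1 << (sqes >> 4) ))    # 64-byte submission queue entries (maximum)
echo $(( 1 << (cqes & 0xf) ))   # 16-byte completion queue entries (required and maximum)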
00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:34:24.881 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:34:24.882 13:55:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:34:24.882 13:55:22 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:34:25.142 13:55:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:34:25.143 13:55:22 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:34:25.143 13:55:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:34:25.143 13:55:22 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:34:25.143 13:55:22 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:25.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:26.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:26.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:26.277 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:34:26.277 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:34:26.535 13:55:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:34:26.535 13:55:23 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.535 13:55:23 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.535 13:55:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:34:26.535 ************************************ 00:34:26.535 START TEST nvme_flexible_data_placement 00:34:26.535 ************************************ 00:34:26.535 13:55:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:34:26.795 Initializing NVMe Controllers 00:34:26.795 Attaching to 0000:00:13.0 00:34:26.795 Controller supports FDP Attached to 0000:00:13.0 00:34:26.795 Namespace ID: 1 Endurance Group ID: 1 00:34:26.795 Initialization complete. 
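The controller selection above comes down to one bit: CTRATT bit 19 is the Flexible Data Placement attribute, so nvme3 (ctratt=0x88010) passes the (( ctratt & 1 << 19 )) test while the 0x8000-only controllers do not. A minimal standalone sketch of the same check, assuming nvme-cli is installed and that its id-ctrl output carries a ctratt line (both are assumptions, not part of this run):

  # Sketch: mirrors the ctrl_has_fdp test from nvme/functions.sh.
  # Assumes `nvme id-ctrl` prints a line like "ctratt : 0x88010".
  ctrl_has_fdp() {
    local dev=$1 ctratt
    ctratt=$(nvme id-ctrl "$dev" | awk '$1 == "ctratt" {print $3}')
    (( ctratt & 1 << 19 ))   # bit 19 = Flexible Data Placement supported
  }
  ctrl_has_fdp /dev/nvme3 && echo "FDP-capable"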
00:34:26.795 00:34:26.795 ================================== 00:34:26.795 == FDP tests for Namespace: #01 == 00:34:26.795 ================================== 00:34:26.795 00:34:26.795 Get Feature: FDP: 00:34:26.795 ================= 00:34:26.795 Enabled: Yes 00:34:26.795 FDP configuration Index: 0 00:34:26.795 00:34:26.795 FDP configurations log page 00:34:26.795 =========================== 00:34:26.795 Number of FDP configurations: 1 00:34:26.795 Version: 0 00:34:26.795 Size: 112 00:34:26.795 FDP Configuration Descriptor: 0 00:34:26.795 Descriptor Size: 96 00:34:26.795 Reclaim Group Identifier format: 2 00:34:26.795 FDP Volatile Write Cache: Not Present 00:34:26.795 FDP Configuration: Valid 00:34:26.795 Vendor Specific Size: 0 00:34:26.795 Number of Reclaim Groups: 2 00:34:26.795 Number of Reclaim Unit Handles: 8 00:34:26.795 Max Placement Identifiers: 128 00:34:26.795 Number of Namespaces Supported: 256 00:34:26.795 Reclaim Unit Nominal Size: 6000000 bytes 00:34:26.795 Estimated Reclaim Unit Time Limit: Not Reported 00:34:26.795 RUH Desc #000: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #001: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #002: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #003: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #004: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #005: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #006: RUH Type: Initially Isolated 00:34:26.795 RUH Desc #007: RUH Type: Initially Isolated 00:34:26.795 00:34:26.795 FDP reclaim unit handle usage log page 00:34:26.795 ====================================== 00:34:26.795 Number of Reclaim Unit Handles: 8 00:34:26.795 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:34:26.795 RUH Usage Desc #001: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #002: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #003: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #004: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #005: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #006: RUH Attributes: Unused 00:34:26.795 RUH Usage Desc #007: RUH Attributes: Unused 00:34:26.795 00:34:26.795 FDP statistics log page 00:34:26.795 ======================= 00:34:26.795 Host bytes with metadata written: 790499328 00:34:26.795 Media bytes with metadata written: 790573056 00:34:26.795 Media bytes erased: 0 00:34:26.795 00:34:26.795 FDP Reclaim unit handle status 00:34:26.795 ============================== 00:34:26.795 Number of RUHS descriptors: 2 00:34:26.795 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000e1f 00:34:26.795 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:34:26.795 00:34:26.795 FDP write on placement id: 0 success 00:34:26.795 00:34:26.795 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:34:26.795 00:34:26.795 IO mgmt send: RUH update for Placement ID: #0 Success 00:34:26.795 00:34:26.795 Get Feature: FDP Events for Placement handle: #0 00:34:26.795 ======================== 00:34:26.795 Number of FDP Events: 6 00:34:26.795 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:34:26.795 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:34:26.795 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:34:26.795 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:34:26.795 FDP Event: #4 Type: Media Reallocated Enabled: No 00:34:26.795 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:34:26.795 00:34:26.795 FDP events log page
00:34:26.795 =================== 00:34:26.795 Number of FDP events: 1 00:34:26.795 FDP Event #0: 00:34:26.795 Event Type: RU Not Written to Capacity 00:34:26.795 Placement Identifier: Valid 00:34:26.795 NSID: Valid 00:34:26.795 Location: Valid 00:34:26.795 Placement Identifier: 0 00:34:26.795 Event Timestamp: b 00:34:26.795 Namespace Identifier: 1 00:34:26.796 Reclaim Group Identifier: 0 00:34:26.796 Reclaim Unit Handle Identifier: 0 00:34:26.796 00:34:26.796 FDP test passed 00:34:26.796 00:34:26.796 real 0m0.338s 00:34:26.796 user 0m0.118s 00:34:26.796 sys 0m0.117s 00:34:26.796 13:55:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.796 13:55:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:34:26.796 ************************************ 00:34:26.796 END TEST nvme_flexible_data_placement 00:34:26.796 ************************************ 00:34:26.796 00:34:26.796 real 0m8.857s 00:34:26.796 user 0m1.623s 00:34:26.796 sys 0m2.241s 00:34:26.796 13:55:24 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.796 13:55:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:34:26.796 ************************************ 00:34:26.796 END TEST nvme_fdp 00:34:26.796 ************************************ 00:34:26.796 13:55:24 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:34:26.796 13:55:24 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:34:26.796 13:55:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.796 13:55:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.796 13:55:24 -- common/autotest_common.sh@10 -- # set +x 00:34:26.796 ************************************ 00:34:26.796 START TEST nvme_rpc 00:34:26.796 ************************************ 00:34:26.796 13:55:24 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:34:27.055 * Looking for test storage... 
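The four FDP dumps above (configurations, reclaim unit handle usage, statistics, events) correspond to NVMe log pages 0x20 through 0x23. Outside the test binary, recent nvme-cli builds expose the same pages through an fdp plugin; a hedged sketch, with the subcommand names and the --endgrp-id flag assumed from that plugin and the endurance group ID of 1 taken from this run:

  # Sketch only; verify subcommand availability with `nvme fdp help`.
  nvme fdp configs /dev/nvme3 --endgrp-id=1   # log page 0x20: configurations
  nvme fdp usage   /dev/nvme3 --endgrp-id=1   # log page 0x21: RUH usage
  nvme fdp stats   /dev/nvme3 --endgrp-id=1   # log page 0x22: statistics
  nvme fdp events  /dev/nvme3 --endgrp-id=1   # log page 0x23: events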
00:34:27.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.055 13:55:24 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.055 --rc genhtml_branch_coverage=1 00:34:27.055 --rc genhtml_function_coverage=1 00:34:27.055 --rc genhtml_legend=1 00:34:27.055 --rc geninfo_all_blocks=1 00:34:27.055 --rc geninfo_unexecuted_blocks=1 00:34:27.055 00:34:27.055 ' 00:34:27.055 13:55:24 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.055 --rc genhtml_branch_coverage=1 00:34:27.055 --rc genhtml_function_coverage=1 00:34:27.055 --rc genhtml_legend=1 00:34:27.056 --rc geninfo_all_blocks=1 00:34:27.056 --rc geninfo_unexecuted_blocks=1 00:34:27.056 00:34:27.056 ' 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:34:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.056 --rc genhtml_branch_coverage=1 00:34:27.056 --rc genhtml_function_coverage=1 00:34:27.056 --rc genhtml_legend=1 00:34:27.056 --rc geninfo_all_blocks=1 00:34:27.056 --rc geninfo_unexecuted_blocks=1 00:34:27.056 00:34:27.056 ' 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.056 --rc genhtml_branch_coverage=1 00:34:27.056 --rc genhtml_function_coverage=1 00:34:27.056 --rc genhtml_legend=1 00:34:27.056 --rc geninfo_all_blocks=1 00:34:27.056 --rc geninfo_unexecuted_blocks=1 00:34:27.056 00:34:27.056 ' 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67563 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:34:27.056 13:55:24 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67563 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67563 ']' 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.056 13:55:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:27.315 [2024-11-20 13:55:24.464774] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
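The get_first_nvme_bdf trace above reduces to a single pipeline: gen_nvme.sh emits an SPDK JSON config in which each controller's traddr parameter is its PCI address, and jq flattens those into a bash array. Condensed from the trace, with paths and addresses as in this run:

  # The enumeration behind bdf=0000:00:10.0 above.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0
  bdf=${bdfs[0]}               # first controller, handed to the RPC test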
00:34:27.315 [2024-11-20 13:55:24.464900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67563 ] 00:34:27.574 [2024-11-20 13:55:24.647877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:27.574 [2024-11-20 13:55:24.827292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.574 [2024-11-20 13:55:24.827318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.514 13:55:25 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.514 13:55:25 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:34:28.514 13:55:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:34:28.773 Nvme0n1 00:34:28.773 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:34:28.773 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:34:29.032 request: 00:34:29.032 { 00:34:29.032 "bdev_name": "Nvme0n1", 00:34:29.032 "filename": "non_existing_file", 00:34:29.032 "method": "bdev_nvme_apply_firmware", 00:34:29.032 "req_id": 1 00:34:29.032 } 00:34:29.032 Got JSON-RPC error response 00:34:29.032 response: 00:34:29.032 { 00:34:29.032 "code": -32603, 00:34:29.032 "message": "open file failed." 00:34:29.032 } 00:34:29.032 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:34:29.032 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:34:29.032 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:29.291 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:34:29.291 13:55:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67563 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67563 ']' 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67563 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67563 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:29.291 killing process with pid 67563 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67563' 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67563 00:34:29.291 13:55:26 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67563 00:34:31.826 00:34:31.826 real 0m4.891s 00:34:31.826 user 0m9.166s 00:34:31.826 sys 0m0.795s 00:34:31.826 ************************************ 00:34:31.826 END TEST nvme_rpc 00:34:31.826 ************************************ 00:34:31.826 13:55:28 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.826 13:55:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:31.826 13:55:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:34:31.826 13:55:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:34:31.826 13:55:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.826 13:55:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.826 ************************************ 00:34:31.826 START TEST nvme_rpc_timeouts 00:34:31.826 ************************************ 00:34:31.826 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:34:31.826 * Looking for test storage... 00:34:31.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:31.826 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:31.826 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:34:31.826 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:32.086 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.087 13:55:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:34:32.087 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.087 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.087 --rc genhtml_branch_coverage=1 00:34:32.087 --rc genhtml_function_coverage=1 00:34:32.087 --rc genhtml_legend=1 00:34:32.087 --rc geninfo_all_blocks=1 00:34:32.087 --rc geninfo_unexecuted_blocks=1 00:34:32.087 00:34:32.087 ' 00:34:32.087 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.087 --rc genhtml_branch_coverage=1 00:34:32.087 --rc genhtml_function_coverage=1 00:34:32.087 --rc genhtml_legend=1 00:34:32.087 --rc geninfo_all_blocks=1 00:34:32.087 --rc geninfo_unexecuted_blocks=1 00:34:32.087 00:34:32.087 ' 00:34:32.087 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.087 --rc genhtml_branch_coverage=1 00:34:32.087 --rc genhtml_function_coverage=1 00:34:32.088 --rc genhtml_legend=1 00:34:32.088 --rc geninfo_all_blocks=1 00:34:32.088 --rc geninfo_unexecuted_blocks=1 00:34:32.088 00:34:32.088 ' 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:32.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.088 --rc genhtml_branch_coverage=1 00:34:32.088 --rc genhtml_function_coverage=1 00:34:32.088 --rc genhtml_legend=1 00:34:32.088 --rc geninfo_all_blocks=1 00:34:32.088 --rc geninfo_unexecuted_blocks=1 00:34:32.088 00:34:32.088 ' 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67646 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67646 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67678 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:34:32.088 13:55:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67678 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67678 ']' 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.088 13:55:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:34:32.088 [2024-11-20 13:55:29.316738] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:34:32.088 [2024-11-20 13:55:29.316872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67678 ] 00:34:32.348 [2024-11-20 13:55:29.490921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:32.348 [2024-11-20 13:55:29.616561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.348 [2024-11-20 13:55:29.616591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.286 Checking default timeout settings: 00:34:33.286 13:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.286 13:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:34:33.286 13:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:34:33.286 13:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:33.545 Making settings changes with rpc: 00:34:33.545 13:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:34:33.545 13:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:34:33.804 Check default vs. modified settings: 00:34:33.804 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:34:33.804 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67646 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67646 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.063 Setting action_on_timeout is changed as expected. 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67646 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67646 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.063 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:34.320 Setting timeout_us is changed as expected. 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
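Condensing the comparison logic traced above: each setting name is grepped out of the two save_config dumps, the value column is taken with awk, punctuation is stripped with sed, and a setting that still matches its default fails the test. As a sketch, with the tmpfile names from this run:

  # Default values (none/0/0) must differ from the modified ones.
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_67646 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_67646 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before == "$after" ]] && { echo "Setting $setting was not modified" >&2; exit 1; }
    echo "Setting $setting is changed as expected."
  done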
00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67646 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67646 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:34.320 Setting timeout_admin_us is changed as expected. 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:34:34.320 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:34:34.321 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67646 /tmp/settings_modified_67646 00:34:34.321 13:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67678 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67678 ']' 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67678 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67678 00:34:34.321 killing process with pid 67678 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67678' 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67678 00:34:34.321 13:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67678 00:34:36.868 RPC TIMEOUT SETTING TEST PASSED. 00:34:36.868 13:55:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
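For reference, the pair of RPCs the passing test exercised, as they appear in the trace (timeout values are in microseconds; the redirection into the modified-settings tmpfile is inferred from the file names rather than shown verbatim):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options \
    --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/settings_modified_67646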
00:34:36.868 00:34:36.868 real 0m5.081s 00:34:36.868 user 0m9.597s 00:34:36.868 sys 0m0.689s 00:34:36.868 13:55:34 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.868 ************************************ 00:34:36.868 END TEST nvme_rpc_timeouts 00:34:36.868 ************************************ 00:34:36.868 13:55:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:34:36.868 13:55:34 -- spdk/autotest.sh@239 -- # uname -s 00:34:36.868 13:55:34 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:34:36.868 13:55:34 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:34:36.868 13:55:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:36.868 13:55:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.868 13:55:34 -- common/autotest_common.sh@10 -- # set +x 00:34:36.868 ************************************ 00:34:36.868 START TEST sw_hotplug 00:34:36.868 ************************************ 00:34:36.868 13:55:34 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:34:37.128 * Looking for test storage... 00:34:37.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.128 13:55:34 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.128 --rc genhtml_branch_coverage=1 00:34:37.128 --rc genhtml_function_coverage=1 00:34:37.128 --rc genhtml_legend=1 00:34:37.128 --rc geninfo_all_blocks=1 00:34:37.128 --rc geninfo_unexecuted_blocks=1 00:34:37.128 00:34:37.128 ' 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.128 --rc genhtml_branch_coverage=1 00:34:37.128 --rc genhtml_function_coverage=1 00:34:37.128 --rc genhtml_legend=1 00:34:37.128 --rc geninfo_all_blocks=1 00:34:37.128 --rc geninfo_unexecuted_blocks=1 00:34:37.128 00:34:37.128 ' 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.128 --rc genhtml_branch_coverage=1 00:34:37.128 --rc genhtml_function_coverage=1 00:34:37.128 --rc genhtml_legend=1 00:34:37.128 --rc geninfo_all_blocks=1 00:34:37.128 --rc geninfo_unexecuted_blocks=1 00:34:37.128 00:34:37.128 ' 00:34:37.128 13:55:34 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.128 --rc genhtml_branch_coverage=1 00:34:37.128 --rc genhtml_function_coverage=1 00:34:37.128 --rc genhtml_legend=1 00:34:37.128 --rc geninfo_all_blocks=1 00:34:37.128 --rc geninfo_unexecuted_blocks=1 00:34:37.128 00:34:37.128 ' 00:34:37.128 13:55:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:37.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:37.697 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:37.697 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:37.697 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:37.697 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@233 -- # local class 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@18 -- # local i 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@18 -- # local i 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@18 -- # local i 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:34:37.957 13:55:35 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@18 -- # local i 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:34:37.957 13:55:35 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:34:37.957 13:55:35 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:38.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:38.476 Waiting for block devices as requested 00:34:38.735 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:38.735 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:38.735 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:38.993 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:44.314 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:44.314 13:55:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:34:44.315 13:55:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:44.589 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:34:44.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:44.589 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:34:45.159 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:34:45.419 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:45.419 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68559 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:34:45.419 13:55:42 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:45.419 13:55:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:45.677 Initializing NVMe Controllers 00:34:45.677 Attaching to 0000:00:10.0 00:34:45.677 Attaching to 0000:00:11.0 00:34:45.937 Attached to 0000:00:10.0 00:34:45.937 Attached to 0000:00:11.0 00:34:45.937 Initialization complete. Starting I/O... 
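A note on how the device list for this hotplug run was built: nvme_in_userspace (traced at sw_hotplug.sh@133 above) walks PCI class code 01/08/02, i.e. mass storage / non-volatile memory controller / NVMe programming interface. The pipeline from scripts/common.sh is runnable on its own; lspci -mm quotes its fields, hence the quoted "0108" match and the trailing tr:

  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # prints 0000:00:10.0 ... 0000:00:13.0 on this host; the test then keeps
  # the first nvme_count=2 of them.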
00:34:45.937 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:34:45.937 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:34:45.937 00:34:46.872 QEMU NVMe Ctrl (12340 ): 1372 I/Os completed (+1372) 00:34:46.872 QEMU NVMe Ctrl (12341 ): 1375 I/Os completed (+1375) 00:34:46.872 00:34:47.809 QEMU NVMe Ctrl (12340 ): 3284 I/Os completed (+1912) 00:34:47.809 QEMU NVMe Ctrl (12341 ): 3287 I/Os completed (+1912) 00:34:47.809 00:34:48.747 QEMU NVMe Ctrl (12340 ): 5248 I/Os completed (+1964) 00:34:48.747 QEMU NVMe Ctrl (12341 ): 5253 I/Os completed (+1966) 00:34:48.747 00:34:50.124 QEMU NVMe Ctrl (12340 ): 7316 I/Os completed (+2068) 00:34:50.124 QEMU NVMe Ctrl (12341 ): 7321 I/Os completed (+2068) 00:34:50.124 00:34:50.693 QEMU NVMe Ctrl (12340 ): 9340 I/Os completed (+2024) 00:34:50.693 QEMU NVMe Ctrl (12341 ): 9345 I/Os completed (+2024) 00:34:50.693 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:51.627 [2024-11-20 13:55:48.736990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:51.627 Controller removed: QEMU NVMe Ctrl (12340 ) 00:34:51.627 [2024-11-20 13:55:48.739010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.739080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.739106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.739135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:34:51.627 [2024-11-20 13:55:48.742447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.742666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.742710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.742734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:34:51.627 EAL: Scan for (pci) bus failed. 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:51.627 [2024-11-20 13:55:48.773569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:51.627 Controller removed: QEMU NVMe Ctrl (12341 ) 00:34:51.627 [2024-11-20 13:55:48.775739] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.775931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.776003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.776052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:34:51.627 [2024-11-20 13:55:48.779141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.779293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.779326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 [2024-11-20 13:55:48.779347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.627 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:34:51.627 EAL: Scan for (pci) bus failed. 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:34:51.627 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:51.886 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.886 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.886 13:55:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:51.886 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:51.886 Attaching to 0000:00:10.0 00:34:51.886 Attached to 0000:00:10.0 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.886 13:55:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:51.886 Attaching to 0000:00:11.0 00:34:51.886 Attached to 0000:00:11.0 00:34:52.868 QEMU NVMe Ctrl (12340 ): 1688 I/Os completed (+1688) 00:34:52.868 QEMU NVMe Ctrl (12341 ): 1457 I/Os completed (+1457) 00:34:52.868 00:34:53.804 QEMU NVMe Ctrl (12340 ): 3452 I/Os completed (+1764) 00:34:53.804 QEMU NVMe Ctrl (12341 ): 3225 I/Os completed (+1768) 00:34:53.804 00:34:54.741 QEMU NVMe Ctrl (12340 ): 5304 I/Os completed (+1852) 00:34:54.741 QEMU NVMe Ctrl (12341 ): 5077 I/Os completed (+1852) 00:34:54.741 00:34:56.119 QEMU NVMe Ctrl (12340 ): 6936 I/Os completed (+1632) 00:34:56.119 QEMU NVMe Ctrl (12341 ): 6723 I/Os completed (+1646) 00:34:56.119 00:34:56.687 QEMU NVMe Ctrl (12340 ): 8792 I/Os completed (+1856) 00:34:56.687 QEMU NVMe Ctrl (12341 ): 8591 I/Os completed (+1868) 00:34:56.687 00:34:58.065 QEMU NVMe Ctrl (12340 ): 10636 I/Os completed (+1844) 00:34:58.065 QEMU NVMe Ctrl (12341 ): 10439 I/Os completed (+1848) 00:34:58.065 00:34:59.001 QEMU NVMe Ctrl (12340 ): 12680 I/Os completed (+2044) 00:34:59.001 QEMU NVMe Ctrl (12341 ): 12483 I/Os completed (+2044) 
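The remove/attach cycles around the failed-state messages are driven through the kernel's PCI sysfs interface: a write to the device's remove node triggers the surprise removal the driver logs above, and a rescan brings the controller back before it is steered to uio_pci_generic. A sketch under those assumptions; the exact node names behind sw_hotplug.sh@56-62 are inferred from the echoed values, not quoted from the script:

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the device
  sleep 6                                        # hotplug_wait used by the test
  echo 1 > /sys/bus/pci/rescan                   # re-enumerate; device returns
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe       # bind to the override driver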
00:34:59.001 00:34:59.958 QEMU NVMe Ctrl (12340 ): 14712 I/Os completed (+2032) 00:34:59.958 QEMU NVMe Ctrl (12341 ): 14516 I/Os completed (+2033) 00:34:59.958 00:35:00.893 QEMU NVMe Ctrl (12340 ): 16628 I/Os completed (+1916) 00:35:00.893 QEMU NVMe Ctrl (12341 ): 16432 I/Os completed (+1916) 00:35:00.893 00:35:01.829 QEMU NVMe Ctrl (12340 ): 18580 I/Os completed (+1952) 00:35:01.829 QEMU NVMe Ctrl (12341 ): 18384 I/Os completed (+1952) 00:35:01.829 00:35:02.763 QEMU NVMe Ctrl (12340 ): 20396 I/Os completed (+1816) 00:35:02.763 QEMU NVMe Ctrl (12341 ): 20204 I/Os completed (+1820) 00:35:02.763 00:35:03.698 QEMU NVMe Ctrl (12340 ): 21849 I/Os completed (+1453) 00:35:03.698 QEMU NVMe Ctrl (12341 ): 21706 I/Os completed (+1502) 00:35:03.698 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:03.957 [2024-11-20 13:56:01.211176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:35:03.957 Controller removed: QEMU NVMe Ctrl (12340 ) 00:35:03.957 [2024-11-20 13:56:01.215520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.215845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.216095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.216243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:35:03.957 [2024-11-20 13:56:01.221268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.221500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.221590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.221697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:03.957 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:03.957 [2024-11-20 13:56:01.245512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
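Editor's note: the interleaved "N I/Os completed (+delta)" lines are per-controller completion counters printed roughly once a second while the test keeps I/O running, and they are the point of the exercise: the deltas show that I/O continues between hotplug events and how quickly throughput recovers after each re-attach. To eyeball the rate from a saved console log (console.log is a placeholder filename):

    grep -o 'Ctrl (12340 ): [0-9]* I/Os completed (+[0-9]*)' console.log \
      | awk -F'[+)]' '{ n++; s += $3 } END { if (n) printf "%d samples, avg +%.0f I/Os each\n", n, s / n }'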
00:35:03.957 Controller removed: QEMU NVMe Ctrl (12341 ) 00:35:03.957 [2024-11-20 13:56:01.248244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.248446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.248647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.248812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:35:03.957 [2024-11-20 13:56:01.253000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.253184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.253320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.957 [2024-11-20 13:56:01.253497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:03.958 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:35:03.958 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:04.216 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:35:04.217 Attaching to 0000:00:10.0 00:35:04.217 Attached to 0000:00:10.0 00:35:04.475 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:04.475 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:04.475 13:56:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:04.475 Attaching to 0000:00:11.0 00:35:04.475 Attached to 0000:00:11.0 00:35:04.734 QEMU NVMe Ctrl (12340 ): 795 I/Os completed (+795) 00:35:04.734 QEMU NVMe Ctrl (12341 ): 549 I/Os completed (+549) 00:35:04.734 00:35:06.142 QEMU NVMe Ctrl (12340 ): 2307 I/Os completed (+1512) 00:35:06.142 QEMU NVMe Ctrl (12341 ): 2070 I/Os completed (+1521) 00:35:06.142 00:35:06.710 QEMU NVMe Ctrl (12340 ): 4063 I/Os completed (+1756) 00:35:06.710 QEMU NVMe Ctrl (12341 ): 3835 I/Os completed (+1765) 00:35:06.710 00:35:08.089 QEMU NVMe Ctrl (12340 ): 6131 I/Os completed (+2068) 00:35:08.089 QEMU NVMe Ctrl (12341 ): 5903 I/Os completed (+2068) 00:35:08.089 00:35:09.027 QEMU NVMe Ctrl (12340 ): 8271 I/Os completed (+2140) 00:35:09.027 QEMU NVMe Ctrl (12341 ): 8043 I/Os completed (+2140) 00:35:09.027 00:35:09.965 QEMU NVMe Ctrl (12340 ): 10011 I/Os completed (+1740) 00:35:09.965 QEMU NVMe Ctrl (12341 ): 9788 I/Os completed (+1745) 00:35:09.965 00:35:10.901 QEMU NVMe Ctrl (12340 ): 11963 I/Os completed (+1952) 00:35:10.901 QEMU NVMe Ctrl (12341 ): 11743 I/Os completed (+1955) 00:35:10.901 00:35:11.837 QEMU NVMe Ctrl (12340 ): 13963 I/Os completed (+2000) 00:35:11.837 QEMU NVMe Ctrl (12341 ): 13743 I/Os completed (+2000) 00:35:11.837 00:35:12.776 QEMU 
NVMe Ctrl (12340 ): 15937 I/Os completed (+1974) 00:35:12.776 QEMU NVMe Ctrl (12341 ): 15715 I/Os completed (+1972) 00:35:12.776 00:35:13.713 QEMU NVMe Ctrl (12340 ): 17660 I/Os completed (+1723) 00:35:13.713 QEMU NVMe Ctrl (12341 ): 17441 I/Os completed (+1726) 00:35:13.713 00:35:15.093 QEMU NVMe Ctrl (12340 ): 19580 I/Os completed (+1920) 00:35:15.093 QEMU NVMe Ctrl (12341 ): 19361 I/Os completed (+1920) 00:35:15.093 00:35:16.041 QEMU NVMe Ctrl (12340 ): 21436 I/Os completed (+1856) 00:35:16.041 QEMU NVMe Ctrl (12341 ): 21217 I/Os completed (+1856) 00:35:16.041 00:35:16.301 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:35:16.301 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:16.301 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:16.301 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:16.301 [2024-11-20 13:56:13.616220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:35:16.301 Controller removed: QEMU NVMe Ctrl (12340 ) 00:35:16.301 [2024-11-20 13:56:13.620679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.301 [2024-11-20 13:56:13.620766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.301 [2024-11-20 13:56:13.620798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.301 [2024-11-20 13:56:13.620831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:35:16.560 [2024-11-20 13:56:13.624201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.624408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.624445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.624473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:16.560 [2024-11-20 13:56:13.653682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
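Editor's note: every cycle in this first phase follows the same cadence, and the xtraced line numbers make the loop shape legible even though the script body never appears in the log: decrement the event counter (line 38), surprise-remove both functions (lines 39-40), rescan and rebind (lines 56-62), then hold for 12 seconds of I/O (line 66) before the next event; the 3-event / 6-second-wait parameters show up later as "remove_attach_helper 3 6". A hedged reconstruction, with surprise_remove and rescan_and_rebind as hypothetical stand-ins for the unseen bodies sketched above:

    hotplug_events=3 hotplug_wait=6
    while ((hotplug_events--)); do
        for dev in "${nvmes[@]}"; do
            surprise_remove "$dev"       # lines 39-40
        done
        rescan_and_rebind                # lines 56-62
        sleep 12                         # line 66
    done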
00:35:16.560 Controller removed: QEMU NVMe Ctrl (12341 ) 00:35:16.560 [2024-11-20 13:56:13.655559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.655654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.655705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.655802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:35:16.560 [2024-11-20 13:56:13.658629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.658675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.658701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 [2024-11-20 13:56:13.658719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:35:16.560 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:16.560 EAL: Scan for (pci) bus failed. 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:16.560 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:35:16.825 Attaching to 0000:00:10.0 00:35:16.825 Attached to 0000:00:10.0 00:35:16.825 13:56:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:16.825 QEMU NVMe Ctrl (12340 ): 188 I/Os completed (+188) 00:35:16.825 00:35:16.825 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:16.825 13:56:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:16.825 Attaching to 0000:00:11.0 00:35:16.825 Attached to 0000:00:11.0 00:35:16.825 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:35:16.825 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:35:16.825 [2024-11-20 13:56:14.025202] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:35:29.060 13:56:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:35:29.060 13:56:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:29.060 13:56:26 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.29 00:35:29.061 13:56:26 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.29 00:35:29.061 13:56:26 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:35:29.061 13:56:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.29 00:35:29.061 13:56:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.29 2 00:35:29.061 remove_attach_helper took 43.29s 
to complete (handling 2 nvme drive(s)) 13:56:26 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:35:35.627 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68559 00:35:35.628 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68559) - No such process 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68559 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69103 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:35:35.628 13:56:32 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69103 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69103 ']' 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.628 13:56:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:35.628 [2024-11-20 13:56:32.170997] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
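Editor's note: with the first pass done in 43.29 s (3 events, 2 drives), tgt_run_hotplug switches to target mode: it launches spdk_tgt, installs a trap so even an aborted run kills the target and rescans the PCI bus (restoring the devices for whatever runs next), and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock accepts connections. Condensed from the xtraced lines 109-113; backgrounding with "&" / "$!" is an inference, the log only shows the pid assignment:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # line 109
    spdk_tgt_pid=$!                                     # line 110
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"                       # line 113: polls /var/tmp/spdk.sock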
00:35:35.628 [2024-11-20 13:56:32.171444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69103 ] 00:35:35.628 [2024-11-20 13:56:32.360644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.628 [2024-11-20 13:56:32.505967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:35:36.565 13:56:33 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:35:36.565 13:56:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:43.215 13:56:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.215 13:56:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:43.215 [2024-11-20 13:56:39.663967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
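Editor's note: the bdev_bdfs helper that drives this phase is fully visible in the xtrace (sw_hotplug.sh:12-13): it asks the target for its bdevs over RPC and reduces the JSON to a unique, sorted list of backing PCI addresses; the /dev/fd/63 in the jq line is bash process substitution. Reconstructed:

    bdev_bdfs() {
        # list the PCI addresses backing the target's NVMe bdevs
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

With hotplug enabled via bdev_nvme_set_hotplug -e, the target's own hotplug poller is what notices the removals, so this pass verifies the bdev layer reacts, not just the PCI layer.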
00:35:43.215 [2024-11-20 13:56:39.666722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:39.666774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:39.666797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:39.666827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:39.666841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:39.666857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:39.666871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:39.666886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:39.666899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:39.666920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:39.666932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:39.666948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 13:56:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:35:43.215 13:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:43.215 [2024-11-20 13:56:40.063995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
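Editor's note: because this pass runs with use_bdev=true (local use_bdev=true at line 29 above), the helper does not trust the sysfs write alone: after each removal it re-reads bdev_bdfs every half second (the "(( 2 > 0 ))" / "sleep 0.5" fragments) until the removed BDFs stop appearing in the target's bdev list. A sketch of that wait, assuming the loop body matches the xtraced fragments:

    bdfs=($(bdev_bdfs))                                          # line 50
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # line 51
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done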
00:35:43.215 [2024-11-20 13:56:40.066605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:40.066642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:40.066662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:40.066687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:40.066702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:40.066714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:40.066730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:40.066742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:40.066757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 [2024-11-20 13:56:40.066770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:43.215 [2024-11-20 13:56:40.066784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:43.215 [2024-11-20 13:56:40.066795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:43.215 13:56:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.215 13:56:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:43.215 13:56:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:43.215 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:35:43.474 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:43.474 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:43.474 13:56:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:55.687 [2024-11-20 13:56:52.664286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:35:55.687 [2024-11-20 13:56:52.667332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:55.687 [2024-11-20 13:56:52.667511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:55.687 [2024-11-20 13:56:52.667709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.687 [2024-11-20 13:56:52.667912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:55.687 [2024-11-20 13:56:52.668028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:55.687 [2024-11-20 13:56:52.668162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.687 [2024-11-20 13:56:52.668276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:55.687 [2024-11-20 13:56:52.668396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:55.687 [2024-11-20 13:56:52.668453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.687 [2024-11-20 13:56:52.668665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:55.687 [2024-11-20 13:56:52.668739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:55.687 [2024-11-20 13:56:52.668802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:55.687 13:56:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:35:55.687 13:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:55.949 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:55.949 13:56:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.949 13:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:55.949 13:56:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.221 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:35:56.221 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:56.221 [2024-11-20 13:56:53.364307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
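Editor's note: the long escaped comparison seen above ("[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]]", sw_hotplug.sh:71) is the re-attach acceptance check: after the 12-second window the helper reads the bdev-backed BDFs once more and requires exactly the expected pair before decrementing the event counter, so a controller that failed to come back fails the run on the spot. The right-hand escaping is just bash quoting a literal pattern; plausibly something like (variable names assumed):

    bdfs=($(bdev_bdfs))                                  # line 70
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]      # line 71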
00:35:56.221 [2024-11-20 13:56:53.367534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:56.221 [2024-11-20 13:56:53.367578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.221 [2024-11-20 13:56:53.367605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.221 [2024-11-20 13:56:53.367635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:56.221 [2024-11-20 13:56:53.367651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.221 [2024-11-20 13:56:53.367664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.221 [2024-11-20 13:56:53.367682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:56.221 [2024-11-20 13:56:53.367694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.221 [2024-11-20 13:56:53.367709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.221 [2024-11-20 13:56:53.367724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:56.221 [2024-11-20 13:56:53.367739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.221 [2024-11-20 13:56:53.367751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:56.481 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:56.481 13:56:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.481 13:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:56.481 13:56:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.740 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:56.740 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:56.740 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:56.740 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:56.740 13:56:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:56.740 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:56.999 13:56:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:09.208 [2024-11-20 13:57:06.264586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:36:09.208 [2024-11-20 13:57:06.267774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.208 [2024-11-20 13:57:06.267826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.208 [2024-11-20 13:57:06.267845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.208 [2024-11-20 13:57:06.267878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.208 [2024-11-20 13:57:06.267890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.208 [2024-11-20 13:57:06.267910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.208 [2024-11-20 13:57:06.267923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.208 [2024-11-20 13:57:06.267937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.208 [2024-11-20 13:57:06.267948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.208 [2024-11-20 13:57:06.267964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.208 [2024-11-20 13:57:06.267975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.208 [2024-11-20 13:57:06.267989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:09.208 13:57:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:36:09.208 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:36:09.467 [2024-11-20 13:57:06.664581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:36:09.467 [2024-11-20 13:57:06.667658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.467 [2024-11-20 13:57:06.667702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.467 [2024-11-20 13:57:06.667724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.467 [2024-11-20 13:57:06.667754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.467 [2024-11-20 13:57:06.667769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.467 [2024-11-20 13:57:06.667781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.467 [2024-11-20 13:57:06.667797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.467 [2024-11-20 13:57:06.667808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.467 [2024-11-20 13:57:06.667827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.467 [2024-11-20 13:57:06.667840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:09.467 [2024-11-20 13:57:06.667855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:09.467 [2024-11-20 13:57:06.667866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:36:09.727 13:57:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.727 13:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:09.727 13:57:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:09.727 13:57:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:09.727 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:09.727 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:09.727 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:09.987 13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:36:22.268 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:36:22.268 13:57:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:36:22.268 13:57:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:28.854 13:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.854 13:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:28.854 [2024-11-20 13:57:25.428423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
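Editor's note: two things happened in quick succession just above: the second pass was timed at 45.73 s, and before launching this third debug_remove_attach_helper 3 6 true pass the test exercised the hotplug monitor toggle itself, disabling and re-enabling it over RPC (bdev_nvme_set_hotplug -d, then -e). The timing comes from bash's time builtin with TIMEFORMAT=%2R, which prints bare wall-clock seconds to two decimals; outside this harness the same toggle is two plain RPC calls:

    TIMEFORMAT=%2R
    time remove_attach_helper 3 6 true        # prints e.g. 45.73

    scripts/rpc.py bdev_nvme_set_hotplug -d   # stop the bdev-nvme hotplug poller
    scripts/rpc.py bdev_nvme_set_hotplug -e   # restart it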
00:36:28.854 [2024-11-20 13:57:25.430887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.431063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.431184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.431311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.431391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.431453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.431604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.431649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.431776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.431839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.431945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.432020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 13:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:36:28.854 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:36:28.854 [2024-11-20 13:57:25.828464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
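Editor's note: the four "aborting outstanding command" / "ABORTED - BY REQUEST" pairs per controller are expected noise rather than failures: each controller keeps four Asynchronous Event Requests (opcode 0x0c, here cid 187-190) permanently parked on its admin queue, and a surprise removal can only complete them by aborting them. In the "(00/07)" status, 00 is the generic status code type and 07 is Command Abort Requested. A quick way to confirm a saved log contains only the benign variant (console.log is a placeholder):

    # any completion printed with a status other than (00/07) would be a real error
    grep 'spdk_nvme_print_completion' console.log | grep -cv '(00/07)'   # expect 0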
00:36:28.854 [2024-11-20 13:57:25.831036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.831204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.831236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.831268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.831285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.831298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.831315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.854 [2024-11-20 13:57:25.831327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.854 [2024-11-20 13:57:25.831343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.854 [2024-11-20 13:57:25.831358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:28.855 [2024-11-20 13:57:25.831373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:28.855 [2024-11-20 13:57:25.831386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:28.855 13:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:28.855 13:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.855 13:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:28.855 13:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.855 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:28.855 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:28.855 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:28.855 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:28.855 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:29.114 13:57:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:41.411 [2024-11-20 13:57:38.428694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:36:41.411 [2024-11-20 13:57:38.431035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.411 [2024-11-20 13:57:38.431233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.411 [2024-11-20 13:57:38.431375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.411 [2024-11-20 13:57:38.431525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.411 [2024-11-20 13:57:38.431675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.411 [2024-11-20 13:57:38.431822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.411 [2024-11-20 13:57:38.431891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.411 [2024-11-20 13:57:38.431950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.411 [2024-11-20 13:57:38.432079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.411 [2024-11-20 13:57:38.432251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.411 [2024-11-20 13:57:38.432389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.411 [2024-11-20 13:57:38.432525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:41.411 13:57:38 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:41.411 13:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:36:41.411 13:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:36:41.670 [2024-11-20 13:57:38.928688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:36:41.670 [2024-11-20 13:57:38.930718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.670 [2024-11-20 13:57:38.930879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.671 [2024-11-20 13:57:38.931007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.671 [2024-11-20 13:57:38.931079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.671 [2024-11-20 13:57:38.931165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.671 [2024-11-20 13:57:38.931226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.671 [2024-11-20 13:57:38.931331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.671 [2024-11-20 13:57:38.931371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.671 [2024-11-20 13:57:38.931428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.671 [2024-11-20 13:57:38.931560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.671 [2024-11-20 13:57:38.931612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.671 [2024-11-20 13:57:38.931718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:36:41.929 13:57:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.929 13:57:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:41.929 13:57:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:41.929 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:42.190 13:57:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:54.400 [2024-11-20 13:57:51.529004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:36:54.400 [2024-11-20 13:57:51.531763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.400 [2024-11-20 13:57:51.531942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.400 [2024-11-20 13:57:51.532066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.400 [2024-11-20 13:57:51.532157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.400 [2024-11-20 13:57:51.532249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.400 [2024-11-20 13:57:51.532312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.400 [2024-11-20 13:57:51.532415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.400 [2024-11-20 13:57:51.532468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.400 [2024-11-20 13:57:51.532642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.400 [2024-11-20 13:57:51.532667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.400 [2024-11-20 13:57:51.532681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.400 [2024-11-20 13:57:51.532697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.400 13:57:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:36:54.400 13:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:36:54.658 [2024-11-20 13:57:51.929027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
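The trace above is the detach-wait half of the hotplug loop: after a device is hot-removed, the script re-runs bdev_bdfs (the rpc_cmd bdev_get_bdevs | jq | sort -u pipeline traced repeatedly here) until the removed BDF drops out of the list, sleeping 0.5 s between polls. A minimal standalone sketch of that poll, assuming SPDK's stock rpc.py client is on PATH (the traced rpc_cmd wrapper and the exact loop bookkeeping in sw_hotplug.sh are not shown in full in this log):

```bash
#!/usr/bin/env bash
# Sketch of the bdev_bdfs helper and detach-wait poll seen in the trace.
# bdev_bdfs lists the PCI addresses (BDFs) backing every NVMe bdev that a
# running SPDK target currently exposes.
bdev_bdfs() {
    rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Poll until no NVMe bdevs remain, announcing the device we are waiting on.
wait_for_detach() {
    local dev=$1 bdfs
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "$dev"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}

wait_for_detach 0000:00:11.0
```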
00:36:54.658 [2024-11-20 13:57:51.931868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.658 [2024-11-20 13:57:51.931919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.658 [2024-11-20 13:57:51.931947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.658 [2024-11-20 13:57:51.931981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.658 [2024-11-20 13:57:51.931999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.658 [2024-11-20 13:57:51.932015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.658 [2024-11-20 13:57:51.932046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.658 [2024-11-20 13:57:51.932061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.658 [2024-11-20 13:57:51.932079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.658 [2024-11-20 13:57:51.932106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:54.658 [2024-11-20 13:57:51.932129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.658 [2024-11-20 13:57:51.932143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:54.917 13:57:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.917 13:57:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.917 13:57:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:54.917 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
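The echo 1, echo uio_pci_generic, echo 0000:00:10.0, and echo '' statements traced around this point are the sysfs half of the hotplug cycle. The trace shows only echo's arguments, never the files being written, so the redirection targets below are a hedged reconstruction using the standard Linux PCI sysfs ABI, not a copy of sw_hotplug.sh:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the sysfs hot-remove / re-add idiom the trace
# suggests. Paths are the stock Linux PCI sysfs ABI; the actual script may
# write to different files. Run as root.
bdf=0000:00:11.0

# Hot-remove the function from the PCI bus (the per-device "echo 1").
echo 1 > "/sys/bus/pci/devices/${bdf}/remove"

# Later, bring it back by rescanning the bus...
echo 1 > /sys/bus/pci/rescan

# ...then steer the rediscovered device to the userspace driver and bind it,
# clearing the override afterwards (the trailing "echo ''").
echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
echo "${bdf}" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"
```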
00:36:55.175 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:36:55.433 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:55.433 13:57:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.23 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.23 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:37:07.644 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:37:07.644 13:58:04 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69103 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69103 ']' 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69103 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69103 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69103' 00:37:07.644 killing process with pid 69103 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69103 00:37:07.644 13:58:04 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69103 00:37:10.208 13:58:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:10.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:11.341 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:11.341 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:11.341 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:37:11.341 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:37:11.341 00:37:11.341 real 2m34.466s 00:37:11.341 user 1m52.354s 00:37:11.341 sys 0m22.700s 00:37:11.341 13:58:08 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.341 13:58:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:11.341 ************************************ 00:37:11.341 END TEST sw_hotplug 00:37:11.341 ************************************ 00:37:11.341 13:58:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:37:11.341 13:58:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:37:11.341 13:58:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:11.341 13:58:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.341 13:58:08 -- common/autotest_common.sh@10 -- # set +x 00:37:11.603 ************************************ 00:37:11.603 START TEST nvme_xnvme 00:37:11.603 ************************************ 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:37:11.603 * Looking for test storage... 00:37:11.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.603 13:58:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.603 --rc genhtml_branch_coverage=1 00:37:11.603 --rc genhtml_function_coverage=1 00:37:11.603 --rc genhtml_legend=1 00:37:11.603 --rc geninfo_all_blocks=1 00:37:11.603 --rc geninfo_unexecuted_blocks=1 00:37:11.603 00:37:11.603 ' 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.603 --rc genhtml_branch_coverage=1 00:37:11.603 --rc genhtml_function_coverage=1 00:37:11.603 --rc genhtml_legend=1 00:37:11.603 --rc geninfo_all_blocks=1 00:37:11.603 --rc geninfo_unexecuted_blocks=1 00:37:11.603 00:37:11.603 ' 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.603 --rc genhtml_branch_coverage=1 00:37:11.603 --rc genhtml_function_coverage=1 00:37:11.603 --rc genhtml_legend=1 00:37:11.603 --rc geninfo_all_blocks=1 00:37:11.603 --rc geninfo_unexecuted_blocks=1 00:37:11.603 00:37:11.603 ' 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.603 --rc genhtml_branch_coverage=1 00:37:11.603 --rc genhtml_function_coverage=1 00:37:11.603 --rc genhtml_legend=1 00:37:11.603 --rc geninfo_all_blocks=1 00:37:11.603 --rc geninfo_unexecuted_blocks=1 00:37:11.603 00:37:11.603 ' 00:37:11.603 13:58:08 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:37:11.603 13:58:08 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:37:11.603 13:58:08 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:37:11.604 13:58:08 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:37:11.604 13:58:08 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:37:11.604 13:58:08 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:37:11.604 13:58:08 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:37:11.604 13:58:08 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:37:11.604 13:58:08 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:37:11.604 #define SPDK_CONFIG_H 00:37:11.604 #define SPDK_CONFIG_AIO_FSDEV 1 00:37:11.604 #define SPDK_CONFIG_APPS 1 00:37:11.604 #define SPDK_CONFIG_ARCH native 00:37:11.604 #define SPDK_CONFIG_ASAN 1 00:37:11.605 #undef SPDK_CONFIG_AVAHI 00:37:11.605 #undef SPDK_CONFIG_CET 00:37:11.605 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:37:11.605 #define SPDK_CONFIG_COVERAGE 1 00:37:11.605 #define SPDK_CONFIG_CROSS_PREFIX 00:37:11.605 #undef SPDK_CONFIG_CRYPTO 00:37:11.605 #undef SPDK_CONFIG_CRYPTO_MLX5 00:37:11.605 #undef SPDK_CONFIG_CUSTOMOCF 00:37:11.605 #undef SPDK_CONFIG_DAOS 00:37:11.605 #define SPDK_CONFIG_DAOS_DIR 00:37:11.605 #define SPDK_CONFIG_DEBUG 1 00:37:11.605 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:37:11.605 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:37:11.605 #define SPDK_CONFIG_DPDK_INC_DIR 00:37:11.605 #define SPDK_CONFIG_DPDK_LIB_DIR 00:37:11.605 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:37:11.605 #undef SPDK_CONFIG_DPDK_UADK 00:37:11.605 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:11.605 #define SPDK_CONFIG_EXAMPLES 1 00:37:11.605 #undef SPDK_CONFIG_FC 00:37:11.605 #define SPDK_CONFIG_FC_PATH 00:37:11.605 #define SPDK_CONFIG_FIO_PLUGIN 1 00:37:11.605 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:37:11.605 #define SPDK_CONFIG_FSDEV 1 00:37:11.605 #undef SPDK_CONFIG_FUSE 00:37:11.605 #undef SPDK_CONFIG_FUZZER 00:37:11.605 #define SPDK_CONFIG_FUZZER_LIB 00:37:11.605 #undef SPDK_CONFIG_GOLANG 00:37:11.605 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:37:11.605 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:37:11.605 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:37:11.605 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:37:11.605 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:37:11.605 #undef SPDK_CONFIG_HAVE_LIBBSD 00:37:11.605 #undef SPDK_CONFIG_HAVE_LZ4 00:37:11.605 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:37:11.605 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:37:11.605 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:37:11.605 #define SPDK_CONFIG_IDXD 1 00:37:11.605 #define SPDK_CONFIG_IDXD_KERNEL 1 00:37:11.605 #undef SPDK_CONFIG_IPSEC_MB 00:37:11.605 #define SPDK_CONFIG_IPSEC_MB_DIR 00:37:11.605 #define SPDK_CONFIG_ISAL 1 00:37:11.605 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:37:11.605 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:37:11.605 #define SPDK_CONFIG_LIBDIR 00:37:11.605 #undef SPDK_CONFIG_LTO 00:37:11.605 #define SPDK_CONFIG_MAX_LCORES 128 00:37:11.605 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:37:11.605 #define SPDK_CONFIG_NVME_CUSE 1 00:37:11.605 #undef SPDK_CONFIG_OCF 00:37:11.605 #define SPDK_CONFIG_OCF_PATH 00:37:11.605 #define SPDK_CONFIG_OPENSSL_PATH 00:37:11.605 #undef SPDK_CONFIG_PGO_CAPTURE 00:37:11.605 #define SPDK_CONFIG_PGO_DIR 00:37:11.605 #undef SPDK_CONFIG_PGO_USE 00:37:11.605 #define SPDK_CONFIG_PREFIX /usr/local 00:37:11.605 #undef SPDK_CONFIG_RAID5F 00:37:11.605 #undef SPDK_CONFIG_RBD 00:37:11.605 #define SPDK_CONFIG_RDMA 1 00:37:11.605 #define SPDK_CONFIG_RDMA_PROV verbs 00:37:11.605 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:37:11.605 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:37:11.605 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:37:11.605 #define SPDK_CONFIG_SHARED 1 00:37:11.605 #undef SPDK_CONFIG_SMA 00:37:11.605 #define SPDK_CONFIG_TESTS 1 00:37:11.605 #undef SPDK_CONFIG_TSAN 00:37:11.605 #define SPDK_CONFIG_UBLK 1 00:37:11.605 #define SPDK_CONFIG_UBSAN 1 00:37:11.605 #undef SPDK_CONFIG_UNIT_TESTS 00:37:11.605 #undef SPDK_CONFIG_URING 00:37:11.605 #define SPDK_CONFIG_URING_PATH 00:37:11.605 #undef SPDK_CONFIG_URING_ZNS 00:37:11.605 #undef SPDK_CONFIG_USDT 00:37:11.605 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:37:11.605 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:37:11.605 #undef SPDK_CONFIG_VFIO_USER 00:37:11.605 #define SPDK_CONFIG_VFIO_USER_DIR 00:37:11.605 #define SPDK_CONFIG_VHOST 1 00:37:11.605 #define SPDK_CONFIG_VIRTIO 1 00:37:11.605 #undef SPDK_CONFIG_VTUNE 00:37:11.605 #define SPDK_CONFIG_VTUNE_DIR 00:37:11.605 #define SPDK_CONFIG_WERROR 1 00:37:11.605 #define SPDK_CONFIG_WPDK_DIR 00:37:11.605 #define SPDK_CONFIG_XNVME 1 00:37:11.605 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:37:11.605 13:58:08 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:37:11.605 13:58:08 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:11.605 13:58:08 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.605 13:58:08 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.605 13:58:08 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.605 13:58:08 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.605 13:58:08 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.605 13:58:08 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.605 13:58:08 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.605 13:58:08 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:37:11.605 13:58:08 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.605 13:58:08 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:11.605 13:58:08 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@68 -- # uname -s 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:37:11.867 
13:58:08 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:37:11.867 13:58:08 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:37:11.867 13:58:08 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:37:11.868 13:58:08 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:37:11.868 13:58:08 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:11.869 13:58:08 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
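The autotest_common.sh lines just above assemble the sanitizer environment for the test run: an LSAN suppression file is generated (echo leak:libfuse3.so) and the ASAN/UBSAN/LSAN option strings are exported. A condensed, runnable sketch of that setup, with every option string and path taken verbatim from the trace:

```bash
#!/usr/bin/env bash
# Condensed sketch of the sanitizer setup traced above. All option strings
# and the suppression-file path are copied from the log.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo 'leak:libfuse3.so' > "$supp"   # suppress a known fuse3 leak under LSAN

export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
export LSAN_OPTIONS="suppressions=$supp"
```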
00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70463 ]] 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70463 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:37:11.869 13:58:08 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2ZHYss 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.2ZHYss/tests/xnvme /tmp/spdk.2ZHYss 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:37:11.869 13:58:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975543808 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592244224 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:37:11.869 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975543808 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592244224 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96271351808 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3431428096 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:37:11.870 * Looking for test storage... 
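[Annotation] The df walk traced above is set_test_storage populating parallel associative arrays (mounts, fss, sizes, avails, uses) keyed by mount point, one read per df row; the selection logic traced below then picks the first storage candidate with enough free space. A simplified sketch of that pattern, with array and variable names taken from the trace; the --block-size=1 flag is an assumption to make df report bytes, matching the byte-sized values in the log (plain df -T prints 1K blocks):

# Parse df -T into associative arrays keyed by mount point, as in set_test_storage.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
done < <(df -T --block-size=1 | grep -v Filesystem)

# Candidate selection as traced below: the test directory lives under /home here,
# and its free space is compared against the requested size from the trace.
requested_size=2214592512
target_space=${avails[/home]}
if (( target_space >= requested_size )); then
    export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
fi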
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975543808
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:37:11.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1685 -- # true
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ?
0 : 0 - 1]' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.870 13:58:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:11.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.870 --rc genhtml_branch_coverage=1 00:37:11.870 --rc genhtml_function_coverage=1 00:37:11.870 --rc genhtml_legend=1 00:37:11.870 --rc geninfo_all_blocks=1 00:37:11.870 --rc geninfo_unexecuted_blocks=1 00:37:11.870 00:37:11.870 ' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:11.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.870 --rc genhtml_branch_coverage=1 00:37:11.870 --rc genhtml_function_coverage=1 00:37:11.870 --rc genhtml_legend=1 00:37:11.870 --rc geninfo_all_blocks=1 
00:37:11.870 --rc geninfo_unexecuted_blocks=1 00:37:11.870 00:37:11.870 ' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:11.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.870 --rc genhtml_branch_coverage=1 00:37:11.870 --rc genhtml_function_coverage=1 00:37:11.870 --rc genhtml_legend=1 00:37:11.870 --rc geninfo_all_blocks=1 00:37:11.870 --rc geninfo_unexecuted_blocks=1 00:37:11.870 00:37:11.870 ' 00:37:11.870 13:58:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:11.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.870 --rc genhtml_branch_coverage=1 00:37:11.870 --rc genhtml_function_coverage=1 00:37:11.870 --rc genhtml_legend=1 00:37:11.871 --rc geninfo_all_blocks=1 00:37:11.871 --rc geninfo_unexecuted_blocks=1 00:37:11.871 00:37:11.871 ' 00:37:11.871 13:58:09 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:11.871 13:58:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.871 13:58:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.871 13:58:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.871 13:58:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.871 13:58:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.871 13:58:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.871 13:58:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.871 13:58:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:37:11.871 13:58:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.871 13:58:09 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:37:11.871 13:58:09 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:37:12.129 13:58:09 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:12.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:12.647 Waiting for block devices as requested 00:37:12.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:12.906 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:12.906 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:37:13.165 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:37:18.434 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:37:18.434 13:58:15 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:37:18.693 13:58:15 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:37:18.693 13:58:15 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:37:18.693 13:58:15 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:37:18.693 13:58:15 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:37:18.693 13:58:15 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:37:18.693 13:58:15 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:37:18.693 13:58:15 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:37:18.952 No valid GPT data, bailing 00:37:18.952 13:58:16 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:18.952 13:58:16 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:37:18.952 13:58:16 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:37:18.952 13:58:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:18.952 13:58:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:18.952 13:58:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:18.952 13:58:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:18.952 ************************************ 00:37:18.952 START TEST xnvme_rpc 00:37:18.952 ************************************ 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70859 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70859 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70859 ']' 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.952 13:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:18.952 [2024-11-20 13:58:16.231174] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
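[Annotation] The xnvme_rpc test starting here launches spdk_tgt, creates an xnvme bdev over RPC, reads the configuration back with framework_get_config, and checks each parameter through jq. A hedged sketch of the same round-trip driven by scripts/rpc.py directly; the assumption is that rpc.py accepts the positional arguments rpc_cmd forwards in the trace (filename, name, io_mechanism), and the jq filter is copied verbatim from the rpc_xnvme helper:

# Start the SPDK target and run a create/inspect/delete round-trip.
# Paths match this run; sleep stands in for the harness's waitforlisten helper.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" &
tgt_pid=$!
sleep 3

rpc="$spdk/scripts/rpc.py"
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # filename, name, io_mechanism

# rpc_xnvme in the trace is this pipeline: dump the bdev subsystem config and
# pull one parameter of the bdev_xnvme_create call back out.
"$rpc" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'

"$rpc" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"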
00:37:18.952 [2024-11-20 13:58:16.231619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70859 ] 00:37:19.212 [2024-11-20 13:58:16.435353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.471 [2024-11-20 13:58:16.602071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.409 xnvme_bdev 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.409 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:20.668 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.668 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70859 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70859 ']' 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70859 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70859 00:37:20.669 killing process with pid 70859 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70859' 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70859 00:37:20.669 13:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70859 00:37:23.201 00:37:23.201 real 0m4.323s 00:37:23.201 user 0m4.363s 00:37:23.201 sys 0m0.611s 00:37:23.201 13:58:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.201 ************************************ 00:37:23.201 END TEST xnvme_rpc 00:37:23.201 ************************************ 00:37:23.201 13:58:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:23.201 13:58:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:23.201 13:58:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.201 13:58:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.201 13:58:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:23.201 ************************************ 00:37:23.201 START TEST xnvme_bdevperf 00:37:23.201 ************************************ 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:23.201 13:58:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:23.460 { 00:37:23.460 "subsystems": [ 00:37:23.460 { 00:37:23.460 "subsystem": "bdev", 00:37:23.460 "config": [ 00:37:23.460 { 00:37:23.460 "params": { 00:37:23.460 "io_mechanism": "libaio", 00:37:23.460 "conserve_cpu": false, 00:37:23.460 "filename": "/dev/nvme0n1", 00:37:23.460 "name": "xnvme_bdev" 00:37:23.460 }, 00:37:23.460 "method": "bdev_xnvme_create" 00:37:23.460 }, 00:37:23.460 { 00:37:23.460 "method": "bdev_wait_for_examine" 00:37:23.461 } 00:37:23.461 ] 00:37:23.461 } 00:37:23.461 ] 00:37:23.461 } 00:37:23.461 [2024-11-20 13:58:20.599384] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:23.461 [2024-11-20 13:58:20.599575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:37:23.719 [2024-11-20 13:58:20.792868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.719 [2024-11-20 13:58:20.920019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.288 Running I/O for 5 seconds... 00:37:26.160 25927.00 IOPS, 101.28 MiB/s [2024-11-20T13:58:24.419Z] 24667.50 IOPS, 96.36 MiB/s [2024-11-20T13:58:25.354Z] 23826.33 IOPS, 93.07 MiB/s [2024-11-20T13:58:26.730Z] 23039.00 IOPS, 90.00 MiB/s [2024-11-20T13:58:26.730Z] 22612.00 IOPS, 88.33 MiB/s 00:37:29.407 Latency(us) 00:37:29.407 [2024-11-20T13:58:26.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.407 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:29.407 xnvme_bdev : 5.01 22579.10 88.20 0.00 0.00 2827.32 376.44 7833.11 00:37:29.407 [2024-11-20T13:58:26.730Z] =================================================================================================================== 00:37:29.407 [2024-11-20T13:58:26.730Z] Total : 22579.10 88.20 0.00 0.00 2827.32 376.44 7833.11 00:37:30.784 13:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:30.784 13:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:30.784 13:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:30.784 13:58:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:30.784 13:58:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:30.784 { 00:37:30.784 "subsystems": [ 00:37:30.784 { 00:37:30.784 "subsystem": "bdev", 00:37:30.784 "config": [ 00:37:30.784 { 00:37:30.784 "params": { 00:37:30.784 "io_mechanism": "libaio", 00:37:30.784 "conserve_cpu": false, 00:37:30.784 "filename": "/dev/nvme0n1", 00:37:30.784 "name": "xnvme_bdev" 00:37:30.784 }, 00:37:30.784 "method": "bdev_xnvme_create" 00:37:30.784 }, 00:37:30.784 { 00:37:30.784 "method": "bdev_wait_for_examine" 00:37:30.784 } 00:37:30.784 ] 00:37:30.784 } 00:37:30.784 ] 00:37:30.784 } 00:37:30.784 [2024-11-20 13:58:27.850114] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
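[Annotation] The bdevperf invocations above take their bdev definition as JSON on --json /dev/fd/62; that descriptor number comes from the harness's own redirection of gen_conf output. An equivalent standalone invocation using a process substitution, with the command line and JSON copied from the randread run in this log:

# Run bdevperf against an xnvme bdev defined inline; the JSON below is the
# exact configuration echoed in the log (process substitution picks its own
# /dev/fd number, which is equivalent to the harness's fd 62).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF
    ) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096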
00:37:30.784 [2024-11-20 13:58:27.850297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71026 ] 00:37:30.784 [2024-11-20 13:58:28.058331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.043 [2024-11-20 13:58:28.228471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.610 Running I/O for 5 seconds... 00:37:33.524 21762.00 IOPS, 85.01 MiB/s [2024-11-20T13:58:31.783Z] 21232.00 IOPS, 82.94 MiB/s [2024-11-20T13:58:32.720Z] 22060.00 IOPS, 86.17 MiB/s [2024-11-20T13:58:34.098Z] 22011.50 IOPS, 85.98 MiB/s 00:37:36.775 Latency(us) 00:37:36.775 [2024-11-20T13:58:34.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.775 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:36.775 xnvme_bdev : 5.00 21818.86 85.23 0.00 0.00 2926.75 302.32 9674.36 00:37:36.775 [2024-11-20T13:58:34.098Z] =================================================================================================================== 00:37:36.775 [2024-11-20T13:58:34.098Z] Total : 21818.86 85.23 0.00 0.00 2926.75 302.32 9674.36 00:37:37.761 00:37:37.761 real 0m14.589s 00:37:37.761 user 0m5.466s 00:37:37.761 sys 0m6.257s 00:37:37.761 13:58:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.761 13:58:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:37.761 ************************************ 00:37:37.761 END TEST xnvme_bdevperf 00:37:37.761 ************************************ 00:37:38.020 13:58:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:38.020 13:58:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.020 13:58:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.020 13:58:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:38.020 ************************************ 00:37:38.020 START TEST xnvme_fio_plugin 00:37:38.020 ************************************ 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:38.020 13:58:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:38.020 { 00:37:38.020 "subsystems": [ 00:37:38.020 { 00:37:38.020 "subsystem": "bdev", 00:37:38.020 "config": [ 00:37:38.020 { 00:37:38.020 "params": { 00:37:38.020 "io_mechanism": "libaio", 00:37:38.020 "conserve_cpu": false, 00:37:38.020 "filename": "/dev/nvme0n1", 00:37:38.020 "name": "xnvme_bdev" 00:37:38.020 }, 00:37:38.020 "method": "bdev_xnvme_create" 00:37:38.020 }, 00:37:38.020 { 00:37:38.020 "method": "bdev_wait_for_examine" 00:37:38.020 } 00:37:38.020 ] 00:37:38.020 } 00:37:38.020 ] 00:37:38.020 } 00:37:38.279 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:38.279 fio-3.35 00:37:38.279 Starting 1 thread 00:37:44.846 00:37:44.846 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71164: Wed Nov 20 13:58:41 2024 00:37:44.846 read: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(435MiB/5001msec) 00:37:44.846 slat (usec): min=5, max=1565, avg=40.82, stdev=52.20 00:37:44.846 clat (usec): min=103, max=7570, avg=1616.96, stdev=874.60 00:37:44.846 lat (usec): min=203, max=7678, avg=1657.78, stdev=875.71 00:37:44.846 clat percentiles (usec): 00:37:44.846 | 1.00th=[ 273], 5.00th=[ 408], 10.00th=[ 545], 20.00th=[ 799], 00:37:44.846 | 30.00th=[ 1045], 40.00th=[ 1287], 50.00th=[ 1532], 60.00th=[ 1778], 00:37:44.846 | 70.00th=[ 2040], 80.00th=[ 2376], 90.00th=[ 2737], 95.00th=[ 3064], 00:37:44.846 | 99.00th=[ 4146], 99.50th=[ 4686], 99.90th=[ 5604], 99.95th=[ 5932], 00:37:44.846 | 99.99th=[ 6652] 00:37:44.846 bw ( KiB/s): min=75560, max=122472, per=100.00%, avg=90324.33, 
stdev=15066.35, samples=9 00:37:44.846 iops : min=18890, max=30618, avg=22581.00, stdev=3766.65, samples=9 00:37:44.846 lat (usec) : 250=0.65%, 500=7.69%, 750=9.47%, 1000=10.51% 00:37:44.846 lat (msec) : 2=40.16%, 4=30.30%, 10=1.21% 00:37:44.846 cpu : usr=16.98%, sys=63.08%, ctx=34940, majf=0, minf=764 00:37:44.846 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=12.1%, 16=25.9%, 32=53.9%, >=64=1.7% 00:37:44.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.846 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:37:44.846 issued rwts: total=111302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.846 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:44.846 00:37:44.846 Run status group 0 (all jobs): 00:37:44.846 READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=435MiB (456MB), run=5001-5001msec 00:37:45.783 ----------------------------------------------------- 00:37:45.783 Suppressions used: 00:37:45.783 count bytes template 00:37:45.783 1 11 /usr/src/fio/parse.c 00:37:45.783 1 8 libtcmalloc_minimal.so 00:37:45.783 1 904 libcrypto.so 00:37:45.783 ----------------------------------------------------- 00:37:45.783 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:45.783 13:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:45.783 { 00:37:45.783 "subsystems": [ 00:37:45.783 { 00:37:45.783 "subsystem": "bdev", 00:37:45.783 "config": [ 00:37:45.783 { 00:37:45.783 "params": { 00:37:45.783 "io_mechanism": "libaio", 00:37:45.783 "conserve_cpu": false, 00:37:45.783 "filename": "/dev/nvme0n1", 00:37:45.783 "name": "xnvme_bdev" 00:37:45.783 }, 00:37:45.783 "method": "bdev_xnvme_create" 00:37:45.783 }, 00:37:45.783 { 00:37:45.783 "method": "bdev_wait_for_examine" 00:37:45.783 } 00:37:45.784 ] 00:37:45.784 } 00:37:45.784 ] 00:37:45.784 } 00:37:46.043 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:46.043 fio-3.35 00:37:46.043 Starting 1 thread 00:37:52.621 00:37:52.621 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71256: Wed Nov 20 13:58:48 2024 00:37:52.621 write: IOPS=24.2k, BW=94.5MiB/s (99.1MB/s)(473MiB/5001msec); 0 zone resets 00:37:52.621 slat (usec): min=5, max=1705, avg=37.50, stdev=50.36 00:37:52.621 clat (usec): min=90, max=6075, avg=1492.95, stdev=776.63 00:37:52.621 lat (usec): min=136, max=6202, avg=1530.45, stdev=777.11 00:37:52.621 clat percentiles (usec): 00:37:52.621 | 1.00th=[ 269], 5.00th=[ 396], 10.00th=[ 515], 20.00th=[ 750], 00:37:52.621 | 30.00th=[ 979], 40.00th=[ 1205], 50.00th=[ 1434], 60.00th=[ 1663], 00:37:52.621 | 70.00th=[ 1893], 80.00th=[ 2180], 90.00th=[ 2507], 95.00th=[ 2769], 00:37:52.621 | 99.00th=[ 3589], 99.50th=[ 4178], 99.90th=[ 4948], 99.95th=[ 5211], 00:37:52.621 | 99.99th=[ 5538] 00:37:52.621 bw ( KiB/s): min=87224, max=118456, per=100.00%, avg=97361.78, stdev=8691.24, samples=9 00:37:52.621 iops : min=21806, max=29614, avg=24340.22, stdev=2172.94, samples=9 00:37:52.621 lat (usec) : 100=0.01%, 250=0.67%, 500=8.71%, 750=10.51%, 1000=11.10% 00:37:52.621 lat (msec) : 2=43.05%, 4=25.33%, 10=0.62% 00:37:52.621 cpu : usr=19.78%, sys=60.78%, ctx=67, majf=0, minf=764 00:37:52.621 IO depths : 1=0.1%, 2=1.0%, 4=4.8%, 8=12.3%, 16=26.2%, 32=53.9%, >=64=1.7% 00:37:52.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.621 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:37:52.621 issued rwts: total=0,121020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.621 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:52.621 00:37:52.621 Run status group 0 (all jobs): 00:37:52.621 WRITE: bw=94.5MiB/s (99.1MB/s), 94.5MiB/s-94.5MiB/s (99.1MB/s-99.1MB/s), io=473MiB (496MB), run=5001-5001msec 00:37:53.190 ----------------------------------------------------- 00:37:53.190 Suppressions used: 00:37:53.190 count bytes template 00:37:53.190 1 11 /usr/src/fio/parse.c 00:37:53.190 1 8 libtcmalloc_minimal.so 00:37:53.190 1 904 libcrypto.so 00:37:53.190 ----------------------------------------------------- 00:37:53.190 00:37:53.190 00:37:53.190 real 0m15.240s 00:37:53.190 user 0m5.957s 00:37:53.190 sys 0m6.975s 00:37:53.190 13:58:50 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.190 ************************************ 00:37:53.190 END TEST xnvme_fio_plugin 00:37:53.190 ************************************ 00:37:53.190 13:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:53.190 13:58:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:53.190 13:58:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:37:53.190 13:58:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:37:53.190 13:58:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:53.190 13:58:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:53.190 13:58:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.190 13:58:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:53.190 ************************************ 00:37:53.190 START TEST xnvme_rpc 00:37:53.190 ************************************ 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71342 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71342 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71342 ']' 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.190 13:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:53.449 [2024-11-20 13:58:50.559415] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
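[Annotation] At this point the harness has flipped conserve_cpu to true and is repeating the same three tests (xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin) for the libaio mechanism. A runnable sketch of the sweep's shape, reconstructed from the xnvme.sh@82-@88 lines in this trace; run_test and the method_bdev_xnvme_create_0 array are harness constructs, so echo stands in for the actual test invocations:

# Reconstructed shape of the conserve_cpu sweep in test/nvme/xnvme/xnvme.sh.
declare -A method_bdev_xnvme_create_0=(
    [name]=xnvme_bdev
    [filename]=/dev/nvme0n1
    [io_mechanism]=libaio
    [conserve_cpu]=false
)
xnvme_conserve_cpu=('false' 'true')

for cc in "${xnvme_conserve_cpu[@]}"; do
    method_bdev_xnvme_create_0[conserve_cpu]=$cc
    conserve_cpu=$cc
    echo "run xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin (conserve_cpu=$cc)"
done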
00:37:53.449 [2024-11-20 13:58:50.560354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71342 ] 00:37:53.449 [2024-11-20 13:58:50.744959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.708 [2024-11-20 13:58:50.862498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.647 xnvme_bdev 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.647 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71342 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71342 ']' 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71342 00:37:54.648 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:54.907 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.907 13:58:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71342 00:37:54.907 killing process with pid 71342 00:37:54.907 13:58:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:54.907 13:58:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:54.907 13:58:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71342' 00:37:54.907 13:58:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71342 00:37:54.907 13:58:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71342 00:37:57.438 00:37:57.438 real 0m4.184s 00:37:57.438 user 0m4.234s 00:37:57.438 sys 0m0.563s 00:37:57.438 ************************************ 00:37:57.438 END TEST xnvme_rpc 00:37:57.438 ************************************ 00:37:57.438 13:58:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.438 13:58:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:57.438 13:58:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:57.438 13:58:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:57.438 13:58:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:57.438 13:58:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:57.438 ************************************ 00:37:57.438 START TEST xnvme_bdevperf 00:37:57.438 ************************************ 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:57.438 13:58:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.438 { 00:37:57.438 "subsystems": [ 00:37:57.438 { 00:37:57.438 "subsystem": "bdev", 00:37:57.438 "config": [ 00:37:57.438 { 00:37:57.438 "params": { 00:37:57.438 "io_mechanism": "libaio", 00:37:57.438 "conserve_cpu": true, 00:37:57.438 "filename": "/dev/nvme0n1", 00:37:57.438 "name": "xnvme_bdev" 00:37:57.438 }, 00:37:57.438 "method": "bdev_xnvme_create" 00:37:57.438 }, 00:37:57.438 { 00:37:57.438 "method": "bdev_wait_for_examine" 00:37:57.438 } 00:37:57.438 ] 00:37:57.438 } 00:37:57.438 ] 00:37:57.438 } 00:37:57.697 [2024-11-20 13:58:54.783327] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:57.697 [2024-11-20 13:58:54.783728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:37:57.697 [2024-11-20 13:58:54.973608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.955 [2024-11-20 13:58:55.101702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.521 Running I/O for 5 seconds... 00:38:00.387 20439.00 IOPS, 79.84 MiB/s [2024-11-20T13:58:58.642Z] 20643.00 IOPS, 80.64 MiB/s [2024-11-20T13:58:59.577Z] 21863.33 IOPS, 85.40 MiB/s [2024-11-20T13:59:00.950Z] 22088.00 IOPS, 86.28 MiB/s 00:38:03.627 Latency(us) 00:38:03.627 [2024-11-20T13:59:00.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.627 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:03.627 xnvme_bdev : 5.00 21855.28 85.37 0.00 0.00 2922.11 243.81 6834.47 00:38:03.627 [2024-11-20T13:59:00.950Z] =================================================================================================================== 00:38:03.627 [2024-11-20T13:59:00.950Z] Total : 21855.28 85.37 0.00 0.00 2922.11 243.81 6834.47 00:38:04.562 13:59:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:04.562 13:59:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:04.562 13:59:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:04.562 13:59:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:04.562 13:59:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:04.562 { 00:38:04.562 "subsystems": [ 00:38:04.562 { 00:38:04.562 "subsystem": "bdev", 00:38:04.562 "config": [ 00:38:04.562 { 00:38:04.562 "params": { 00:38:04.562 "io_mechanism": "libaio", 00:38:04.562 "conserve_cpu": true, 00:38:04.562 "filename": "/dev/nvme0n1", 00:38:04.562 "name": "xnvme_bdev" 00:38:04.562 }, 00:38:04.562 "method": "bdev_xnvme_create" 00:38:04.562 }, 00:38:04.562 { 00:38:04.562 "method": "bdev_wait_for_examine" 00:38:04.562 } 00:38:04.562 ] 00:38:04.562 } 00:38:04.562 ] 00:38:04.562 } 00:38:04.562 [2024-11-20 13:59:01.879173] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
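[Annotation] The bdevperf runs in this stretch are fed the gen_conf JSON over a pipe ("--json /dev/fd/62" above). A minimal standalone sketch of the same invocation, assuming the CI checkout layout and with a hypothetical /tmp/xnvme_bdev.json standing in for the gen_conf pipe:

    # Sketch only: materialize the bdev config (copied from the log) instead of piping it.
    cat > /tmp/xnvme_bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_xnvme_create",
          "params": { "io_mechanism": "libaio", "conserve_cpu": true,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096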
00:38:04.562 [2024-11-20 13:59:01.879836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71508 ] 00:38:04.821 [2024-11-20 13:59:02.088394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.079 [2024-11-20 13:59:02.211678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:05.338 Running I/O for 5 seconds... 00:38:07.281 21387.00 IOPS, 83.54 MiB/s [2024-11-20T13:59:05.979Z] 22168.50 IOPS, 86.60 MiB/s [2024-11-20T13:59:06.913Z] 23923.33 IOPS, 93.45 MiB/s [2024-11-20T13:59:07.847Z] 25198.50 IOPS, 98.43 MiB/s 00:38:10.524 Latency(us) 00:38:10.524 [2024-11-20T13:59:07.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.524 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:10.524 xnvme_bdev : 5.00 24724.68 96.58 0.00 0.00 2583.13 261.36 18974.23 00:38:10.524 [2024-11-20T13:59:07.847Z] =================================================================================================================== 00:38:10.524 [2024-11-20T13:59:07.847Z] Total : 24724.68 96.58 0.00 0.00 2583.13 261.36 18974.23 00:38:11.901 00:38:11.901 real 0m14.150s 00:38:11.901 user 0m4.949s 00:38:11.901 sys 0m6.764s 00:38:11.901 ************************************ 00:38:11.901 END TEST xnvme_bdevperf 00:38:11.901 ************************************ 00:38:11.901 13:59:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.901 13:59:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:11.901 13:59:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:11.901 13:59:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:11.901 13:59:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.901 13:59:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:11.901 ************************************ 00:38:11.901 START TEST xnvme_fio_plugin 00:38:11.901 ************************************ 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:11.901 
13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:11.901 13:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:11.901 { 00:38:11.901 "subsystems": [ 00:38:11.901 { 00:38:11.901 "subsystem": "bdev", 00:38:11.901 "config": [ 00:38:11.901 { 00:38:11.901 "params": { 00:38:11.901 "io_mechanism": "libaio", 00:38:11.901 "conserve_cpu": true, 00:38:11.901 "filename": "/dev/nvme0n1", 00:38:11.901 "name": "xnvme_bdev" 00:38:11.901 }, 00:38:11.901 "method": "bdev_xnvme_create" 00:38:11.901 }, 00:38:11.901 { 00:38:11.901 "method": "bdev_wait_for_examine" 00:38:11.901 } 00:38:11.901 ] 00:38:11.901 } 00:38:11.901 ] 00:38:11.901 } 00:38:11.901 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:11.901 fio-3.35 00:38:11.901 Starting 1 thread 00:38:18.460 00:38:18.460 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71633: Wed Nov 20 13:59:15 2024 00:38:18.460 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(525MiB/5001msec) 00:38:18.460 slat (usec): min=5, max=1556, avg=33.77, stdev=54.30 00:38:18.460 clat (usec): min=90, max=6555, avg=1355.66, stdev=742.36 00:38:18.460 lat (usec): min=186, max=6659, avg=1389.43, stdev=743.62 00:38:18.460 clat percentiles (usec): 00:38:18.460 | 1.00th=[ 253], 5.00th=[ 367], 10.00th=[ 474], 20.00th=[ 676], 00:38:18.460 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1483], 00:38:18.460 | 70.00th=[ 1680], 80.00th=[ 1909], 90.00th=[ 2343], 95.00th=[ 2704], 00:38:18.460 | 99.00th=[ 3490], 99.50th=[ 4080], 99.90th=[ 5145], 99.95th=[ 5407], 00:38:18.460 | 99.99th=[ 5932] 00:38:18.460 bw ( KiB/s): min=79224, max=134976, per=97.77%, avg=105094.78, stdev=18190.07, samples=9 
00:38:18.460 iops : min=19806, max=33744, avg=26273.67, stdev=4547.53, samples=9 00:38:18.460 lat (usec) : 100=0.01%, 250=0.97%, 500=10.31%, 750=12.38%, 1000=12.61% 00:38:18.460 lat (msec) : 2=46.42%, 4=16.77%, 10=0.54% 00:38:18.460 cpu : usr=17.32%, sys=67.70%, ctx=27653, majf=0, minf=741 00:38:18.460 IO depths : 1=0.1%, 2=1.0%, 4=4.5%, 8=12.1%, 16=26.5%, 32=54.2%, >=64=1.7% 00:38:18.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.460 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:38:18.460 issued rwts: total=134386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:18.460 00:38:18.460 Run status group 0 (all jobs): 00:38:18.460 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=525MiB (550MB), run=5001-5001msec 00:38:19.834 ----------------------------------------------------- 00:38:19.834 Suppressions used: 00:38:19.834 count bytes template 00:38:19.834 1 11 /usr/src/fio/parse.c 00:38:19.834 1 8 libtcmalloc_minimal.so 00:38:19.834 1 904 libcrypto.so 00:38:19.834 ----------------------------------------------------- 00:38:19.834 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.834 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:19.835 13:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:19.835 { 00:38:19.835 "subsystems": [ 00:38:19.835 { 00:38:19.835 "subsystem": "bdev", 00:38:19.835 "config": [ 00:38:19.835 { 00:38:19.835 "params": { 00:38:19.835 "io_mechanism": "libaio", 00:38:19.835 "conserve_cpu": true, 00:38:19.835 "filename": "/dev/nvme0n1", 00:38:19.835 "name": "xnvme_bdev" 00:38:19.835 }, 00:38:19.835 "method": "bdev_xnvme_create" 00:38:19.835 }, 00:38:19.835 { 00:38:19.835 "method": "bdev_wait_for_examine" 00:38:19.835 } 00:38:19.835 ] 00:38:19.835 } 00:38:19.835 ] 00:38:19.835 } 00:38:19.835 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:19.835 fio-3.35 00:38:19.835 Starting 1 thread 00:38:26.418 00:38:26.418 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71736: Wed Nov 20 13:59:23 2024 00:38:26.418 write: IOPS=21.9k, BW=85.6MiB/s (89.7MB/s)(428MiB/5001msec); 0 zone resets 00:38:26.418 slat (usec): min=5, max=6886, avg=39.84, stdev=55.93 00:38:26.418 clat (usec): min=106, max=14659, avg=1696.27, stdev=1226.70 00:38:26.418 lat (usec): min=190, max=14692, avg=1736.11, stdev=1226.48 00:38:26.418 clat percentiles (usec): 00:38:26.418 | 1.00th=[ 281], 5.00th=[ 416], 10.00th=[ 545], 20.00th=[ 799], 00:38:26.418 | 30.00th=[ 1057], 40.00th=[ 1303], 50.00th=[ 1549], 60.00th=[ 1795], 00:38:26.418 | 70.00th=[ 2057], 80.00th=[ 2343], 90.00th=[ 2704], 95.00th=[ 3064], 00:38:26.418 | 99.00th=[ 8029], 99.50th=[ 9241], 99.90th=[12780], 99.95th=[13698], 00:38:26.418 | 99.99th=[14484] 00:38:26.418 bw ( KiB/s): min=81448, max=110800, per=99.84%, avg=87488.78, stdev=9435.53, samples=9 00:38:26.418 iops : min=20362, max=27700, avg=21872.00, stdev=2359.00, samples=9 00:38:26.418 lat (usec) : 250=0.54%, 500=7.66%, 750=9.84%, 1000=9.95% 00:38:26.418 lat (msec) : 2=39.83%, 4=29.72%, 10=2.18%, 20=0.29% 00:38:26.418 cpu : usr=22.38%, sys=56.18%, ctx=98, majf=0, minf=764 00:38:26.418 IO depths : 1=0.1%, 2=1.1%, 4=5.1%, 8=12.5%, 16=26.1%, 32=53.4%, >=64=1.7% 00:38:26.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:26.418 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:38:26.418 issued rwts: total=0,109553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:26.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:26.418 00:38:26.418 Run status group 0 (all jobs): 00:38:26.418 WRITE: bw=85.6MiB/s (89.7MB/s), 85.6MiB/s-85.6MiB/s (89.7MB/s-89.7MB/s), io=428MiB (449MB), run=5001-5001msec 00:38:27.352 ----------------------------------------------------- 00:38:27.352 Suppressions used: 00:38:27.352 count bytes template 00:38:27.352 1 11 /usr/src/fio/parse.c 00:38:27.352 1 8 libtcmalloc_minimal.so 00:38:27.352 1 904 libcrypto.so 00:38:27.352 ----------------------------------------------------- 00:38:27.352 00:38:27.352 00:38:27.352 real 0m15.753s 00:38:27.352 user 0m6.572s 00:38:27.352 sys 0m6.997s 00:38:27.352 13:59:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.352 13:59:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:27.352 ************************************ 00:38:27.352 END TEST xnvme_fio_plugin 00:38:27.352 ************************************ 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:38:27.610 13:59:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:38:27.610 13:59:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:27.610 13:59:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.610 13:59:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:27.610 ************************************ 00:38:27.610 START TEST xnvme_rpc 00:38:27.610 ************************************ 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:38:27.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71828 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71828 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71828 ']' 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:27.610 13:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.610 [2024-11-20 13:59:24.832264] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
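[Annotation] In the xnvme_rpc test that starts here, rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and the whole test is a create/inspect/delete round-trip against the freshly started spdk_tgt. A rough standalone equivalent, assuming commands are run from the same spdk checkout and eliding the socket wait:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &    # target binary from the trace above
    # ... wait for /var/tmp/spdk.sock to appear ...
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # expect: io_uring
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev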
00:38:27.610 [2024-11-20 13:59:24.832748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:38:27.868 [2024-11-20 13:59:25.029210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.868 [2024-11-20 13:59:25.141183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 xnvme_bdev 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.242 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71828 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71828 ']' 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71828 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.243 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71828 00:38:29.501 killing process with pid 71828 00:38:29.501 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:29.501 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:29.501 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71828' 00:38:29.501 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71828 00:38:29.501 13:59:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71828 00:38:32.035 ************************************ 00:38:32.035 END TEST xnvme_rpc 00:38:32.035 ************************************ 00:38:32.035 00:38:32.035 real 0m4.375s 00:38:32.035 user 0m4.457s 00:38:32.035 sys 0m0.662s 00:38:32.035 13:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.035 13:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:32.035 13:59:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:38:32.035 13:59:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:32.035 13:59:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:32.035 13:59:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:32.035 ************************************ 00:38:32.035 START TEST xnvme_bdevperf 00:38:32.035 ************************************ 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:32.035 13:59:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:32.035 { 00:38:32.035 "subsystems": [ 00:38:32.035 { 00:38:32.035 "subsystem": "bdev", 00:38:32.035 "config": [ 00:38:32.035 { 00:38:32.035 "params": { 00:38:32.035 "io_mechanism": "io_uring", 00:38:32.035 "conserve_cpu": false, 00:38:32.035 "filename": "/dev/nvme0n1", 00:38:32.035 "name": "xnvme_bdev" 00:38:32.035 }, 00:38:32.035 "method": "bdev_xnvme_create" 00:38:32.035 }, 00:38:32.035 { 00:38:32.035 "method": "bdev_wait_for_examine" 00:38:32.035 } 00:38:32.035 ] 00:38:32.035 } 00:38:32.035 ] 00:38:32.035 } 00:38:32.035 [2024-11-20 13:59:29.245140] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:32.035 [2024-11-20 13:59:29.245353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71914 ] 00:38:32.294 [2024-11-20 13:59:29.440252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.294 [2024-11-20 13:59:29.556872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.860 Running I/O for 5 seconds... 00:38:34.733 33311.00 IOPS, 130.12 MiB/s [2024-11-20T13:59:32.994Z] 34030.50 IOPS, 132.93 MiB/s [2024-11-20T13:59:34.373Z] 33902.00 IOPS, 132.43 MiB/s [2024-11-20T13:59:34.942Z] 33875.25 IOPS, 132.33 MiB/s 00:38:37.619 Latency(us) 00:38:37.619 [2024-11-20T13:59:34.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.619 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:37.619 xnvme_bdev : 5.00 33774.40 131.93 0.00 0.00 1890.40 407.65 9674.36 00:38:37.619 [2024-11-20T13:59:34.942Z] =================================================================================================================== 00:38:37.619 [2024-11-20T13:59:34.942Z] Total : 33774.40 131.93 0.00 0.00 1890.40 407.65 9674.36 00:38:38.999 13:59:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:38.999 13:59:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:38.999 13:59:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:38.999 13:59:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:38.999 13:59:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:38.999 { 00:38:38.999 "subsystems": [ 00:38:38.999 { 00:38:38.999 "subsystem": "bdev", 00:38:38.999 "config": [ 00:38:38.999 { 00:38:38.999 "params": { 00:38:38.999 "io_mechanism": "io_uring", 00:38:38.999 "conserve_cpu": false, 00:38:38.999 "filename": "/dev/nvme0n1", 00:38:38.999 "name": "xnvme_bdev" 00:38:38.999 }, 00:38:38.999 "method": "bdev_xnvme_create" 00:38:38.999 }, 00:38:38.999 { 00:38:38.999 "method": "bdev_wait_for_examine" 00:38:38.999 } 00:38:38.999 ] 00:38:38.999 } 00:38:38.999 ] 00:38:38.999 } 00:38:39.258 [2024-11-20 13:59:36.394660] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
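[Annotation] A quick sanity check on the Latency tables: bdevperf's MiB/s column is IOPS times the 4096-byte IO size (the -o 4096 flag), divided by 2^20. For the randread total just above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 33774.40 * 4096 / 1048576 }'   # -> 131.93 MiB/s, matching the table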
00:38:39.258 [2024-11-20 13:59:36.394904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71996 ] 00:38:39.517 [2024-11-20 13:59:36.600741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.517 [2024-11-20 13:59:36.761001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.085 Running I/O for 5 seconds... 00:38:41.956 28932.00 IOPS, 113.02 MiB/s [2024-11-20T13:59:40.212Z] 28578.00 IOPS, 111.63 MiB/s [2024-11-20T13:59:41.589Z] 28958.33 IOPS, 113.12 MiB/s [2024-11-20T13:59:42.526Z] 29014.75 IOPS, 113.34 MiB/s [2024-11-20T13:59:42.526Z] 28831.00 IOPS, 112.62 MiB/s 00:38:45.203 Latency(us) 00:38:45.203 [2024-11-20T13:59:42.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.203 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:45.203 xnvme_bdev : 5.01 28787.82 112.45 0.00 0.00 2217.26 530.53 7489.83 00:38:45.203 [2024-11-20T13:59:42.526Z] =================================================================================================================== 00:38:45.203 [2024-11-20T13:59:42.526Z] Total : 28787.82 112.45 0.00 0.00 2217.26 530.53 7489.83 00:38:46.139 00:38:46.139 real 0m14.331s 00:38:46.139 user 0m6.731s 00:38:46.139 sys 0m7.241s 00:38:46.139 13:59:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.139 13:59:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:46.139 ************************************ 00:38:46.139 END TEST xnvme_bdevperf 00:38:46.139 ************************************ 00:38:46.398 13:59:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:46.398 13:59:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:46.398 13:59:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.398 13:59:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:46.398 ************************************ 00:38:46.398 START TEST xnvme_fio_plugin 00:38:46.398 ************************************ 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:46.398 
13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:46.398 13:59:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:46.398 { 00:38:46.398 "subsystems": [ 00:38:46.398 { 00:38:46.398 "subsystem": "bdev", 00:38:46.398 "config": [ 00:38:46.398 { 00:38:46.398 "params": { 00:38:46.398 "io_mechanism": "io_uring", 00:38:46.398 "conserve_cpu": false, 00:38:46.398 "filename": "/dev/nvme0n1", 00:38:46.398 "name": "xnvme_bdev" 00:38:46.398 }, 00:38:46.398 "method": "bdev_xnvme_create" 00:38:46.398 }, 00:38:46.398 { 00:38:46.398 "method": "bdev_wait_for_examine" 00:38:46.398 } 00:38:46.398 ] 00:38:46.398 } 00:38:46.398 ] 00:38:46.398 } 00:38:46.657 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:46.657 fio-3.35 00:38:46.657 Starting 1 thread 00:38:53.221 00:38:53.221 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72121: Wed Nov 20 13:59:49 2024 00:38:53.221 read: IOPS=31.4k, BW=123MiB/s (129MB/s)(614MiB/5001msec) 00:38:53.221 slat (usec): min=3, max=305, avg= 4.59, stdev= 1.89 00:38:53.221 clat (usec): min=204, max=6131, avg=1859.71, stdev=235.41 00:38:53.221 lat (usec): min=209, max=6160, avg=1864.30, stdev=235.73 00:38:53.221 clat percentiles (usec): 00:38:53.221 | 1.00th=[ 1450], 5.00th=[ 1614], 10.00th=[ 1663], 20.00th=[ 1713], 00:38:53.221 | 30.00th=[ 1762], 40.00th=[ 1795], 50.00th=[ 1827], 60.00th=[ 1876], 00:38:53.221 | 70.00th=[ 1909], 80.00th=[ 1975], 90.00th=[ 2073], 95.00th=[ 2180], 00:38:53.221 | 99.00th=[ 2573], 99.50th=[ 2868], 99.90th=[ 3851], 99.95th=[ 5604], 00:38:53.221 | 99.99th=[ 5997] 00:38:53.221 bw ( KiB/s): 
min=115489, max=132416, per=99.97%, avg=125609.89, stdev=5272.52, samples=9 00:38:53.221 iops : min=28872, max=33104, avg=31402.44, stdev=1318.19, samples=9 00:38:53.221 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.06% 00:38:53.221 lat (msec) : 2=84.03%, 4=15.82%, 10=0.09% 00:38:53.221 cpu : usr=30.72%, sys=65.34%, ctx=3227, majf=0, minf=762 00:38:53.221 IO depths : 1=1.2%, 2=2.7%, 4=5.9%, 8=12.1%, 16=24.3%, 32=51.7%, >=64=2.0% 00:38:53.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.221 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:38:53.221 issued rwts: total=157088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:53.221 00:38:53.221 Run status group 0 (all jobs): 00:38:53.221 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=614MiB (643MB), run=5001-5001msec 00:38:53.789 ----------------------------------------------------- 00:38:53.789 Suppressions used: 00:38:53.789 count bytes template 00:38:53.789 1 11 /usr/src/fio/parse.c 00:38:53.789 1 8 libtcmalloc_minimal.so 00:38:53.789 1 904 libcrypto.so 00:38:53.789 ----------------------------------------------------- 00:38:53.789 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
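[Annotation] The trace around this point is the harness locating the ASan runtime that the fio plugin links against and preloading it ahead of the plugin, so the sanitizer initializes first. Condensed into plain shell (variable names mirror the trace; /tmp/xnvme_io_uring.json is a hypothetical stand-in for the /dev/fd/62 pipe the harness uses):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_io_uring.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev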
00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:54.049 13:59:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:54.049 { 00:38:54.049 "subsystems": [ 00:38:54.049 { 00:38:54.049 "subsystem": "bdev", 00:38:54.049 "config": [ 00:38:54.049 { 00:38:54.049 "params": { 00:38:54.049 "io_mechanism": "io_uring", 00:38:54.049 "conserve_cpu": false, 00:38:54.049 "filename": "/dev/nvme0n1", 00:38:54.049 "name": "xnvme_bdev" 00:38:54.049 }, 00:38:54.049 "method": "bdev_xnvme_create" 00:38:54.049 }, 00:38:54.049 { 00:38:54.049 "method": "bdev_wait_for_examine" 00:38:54.049 } 00:38:54.049 ] 00:38:54.049 } 00:38:54.049 ] 00:38:54.049 } 00:38:54.049 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:54.049 fio-3.35 00:38:54.049 Starting 1 thread 00:39:00.620 00:39:00.620 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72218: Wed Nov 20 13:59:57 2024 00:39:00.620 write: IOPS=31.2k, BW=122MiB/s (128MB/s)(610MiB/5002msec); 0 zone resets 00:39:00.620 slat (nsec): min=3047, max=92723, avg=5116.47, stdev=1912.11 00:39:00.620 clat (usec): min=1327, max=4032, avg=1847.91, stdev=210.25 00:39:00.620 lat (usec): min=1331, max=4071, avg=1853.03, stdev=210.96 00:39:00.620 clat percentiles (usec): 00:39:00.620 | 1.00th=[ 1516], 5.00th=[ 1582], 10.00th=[ 1631], 20.00th=[ 1680], 00:39:00.620 | 30.00th=[ 1729], 40.00th=[ 1778], 50.00th=[ 1811], 60.00th=[ 1860], 00:39:00.620 | 70.00th=[ 1909], 80.00th=[ 1975], 90.00th=[ 2114], 95.00th=[ 2245], 00:39:00.620 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 2966], 99.95th=[ 3097], 00:39:00.620 | 99.99th=[ 3884] 00:39:00.620 bw ( KiB/s): min=121856, max=131321, per=100.00%, avg=126321.00, stdev=3131.63, samples=9 00:39:00.620 iops : min=30464, max=32830, avg=31580.22, stdev=782.86, samples=9 00:39:00.620 lat (msec) : 2=82.87%, 4=17.13%, 10=0.01% 00:39:00.620 cpu : usr=30.79%, sys=68.37%, ctx=14, majf=0, minf=762 00:39:00.620 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:39:00.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.620 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:39:00.620 issued rwts: total=0,156150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:00.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:00.620 00:39:00.620 Run status group 0 (all jobs): 00:39:00.620 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=610MiB (640MB), run=5002-5002msec 00:39:01.560 ----------------------------------------------------- 00:39:01.560 Suppressions used: 00:39:01.560 count bytes template 00:39:01.560 1 11 /usr/src/fio/parse.c 00:39:01.560 1 8 libtcmalloc_minimal.so 00:39:01.560 1 904 libcrypto.so 00:39:01.560 ----------------------------------------------------- 00:39:01.560 00:39:01.560 00:39:01.560 real 0m15.178s 00:39:01.560 user 0m7.358s 00:39:01.560 sys 0m7.299s 00:39:01.560 13:59:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.560 13:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:01.560 ************************************ 00:39:01.560 END TEST xnvme_fio_plugin 00:39:01.560 ************************************ 00:39:01.560 13:59:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:39:01.560 13:59:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:39:01.560 13:59:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:39:01.560 13:59:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:39:01.560 13:59:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:01.560 13:59:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.560 13:59:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:01.560 ************************************ 00:39:01.560 START TEST xnvme_rpc 00:39:01.560 ************************************ 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72306 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72306 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72306 ']' 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.560 13:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:01.819 [2024-11-20 13:59:58.900523] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
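[Annotation] The cc map set up above (xnvme.sh@50) is how the harness turns the conserve_cpu loop variable into an optional -c flag on the create call that follows. In isolation (rpc_cmd being the harness RPC helper seen throughout this log):

    declare -A cc=( ["false"]="" ["true"]="-c" )   # mirrors cc["false"]= / cc["true"]=-c above
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ${cc[true]}   # expands to ... -c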
00:39:01.820 [2024-11-20 13:59:58.901431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72306 ] 00:39:01.820 [2024-11-20 13:59:59.096444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.088 [2024-11-20 13:59:59.216652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.028 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:03.028 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 xnvme_bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72306 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72306 ']' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72306 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.029 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72306 00:39:03.288 killing process with pid 72306 00:39:03.288 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.288 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.288 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72306' 00:39:03.288 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72306 00:39:03.288 14:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72306 00:39:05.823 00:39:05.823 real 0m4.107s 00:39:05.823 user 0m4.260s 00:39:05.823 sys 0m0.557s 00:39:05.823 14:00:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.823 ************************************ 00:39:05.823 END TEST xnvme_rpc 00:39:05.823 ************************************ 00:39:05.823 14:00:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:05.823 14:00:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:39:05.823 14:00:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:05.823 14:00:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.823 14:00:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:05.823 ************************************ 00:39:05.823 START TEST xnvme_bdevperf 00:39:05.823 ************************************ 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:05.823 14:00:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:05.823 { 00:39:05.823 "subsystems": [ 00:39:05.823 { 00:39:05.823 "subsystem": "bdev", 00:39:05.823 "config": [ 00:39:05.823 { 00:39:05.823 "params": { 00:39:05.823 "io_mechanism": "io_uring", 00:39:05.823 "conserve_cpu": true, 00:39:05.823 "filename": "/dev/nvme0n1", 00:39:05.823 "name": "xnvme_bdev" 00:39:05.823 }, 00:39:05.823 "method": "bdev_xnvme_create" 00:39:05.823 }, 00:39:05.823 { 00:39:05.823 "method": "bdev_wait_for_examine" 00:39:05.823 } 00:39:05.823 ] 00:39:05.823 } 00:39:05.823 ] 00:39:05.823 } 00:39:05.823 [2024-11-20 14:00:03.045638] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:05.823 [2024-11-20 14:00:03.045821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:39:06.083 [2024-11-20 14:00:03.240295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.083 [2024-11-20 14:00:03.361319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.651 Running I/O for 5 seconds... 00:39:08.521 30524.00 IOPS, 119.23 MiB/s [2024-11-20T14:00:06.781Z] 30322.50 IOPS, 118.45 MiB/s [2024-11-20T14:00:08.160Z] 30596.67 IOPS, 119.52 MiB/s [2024-11-20T14:00:09.097Z] 30986.00 IOPS, 121.04 MiB/s [2024-11-20T14:00:09.097Z] 31173.40 IOPS, 121.77 MiB/s 00:39:11.774 Latency(us) 00:39:11.774 [2024-11-20T14:00:09.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.774 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:39:11.774 xnvme_bdev : 5.01 31144.12 121.66 0.00 0.00 2050.17 1037.65 9799.19 00:39:11.774 [2024-11-20T14:00:09.097Z] =================================================================================================================== 00:39:11.774 [2024-11-20T14:00:09.097Z] Total : 31144.12 121.66 0.00 0.00 2050.17 1037.65 9799.19 00:39:13.153 14:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:13.153 14:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:13.153 14:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:39:13.153 14:00:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:13.153 14:00:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:13.153 { 00:39:13.153 "subsystems": [ 00:39:13.153 { 00:39:13.153 "subsystem": "bdev", 00:39:13.153 "config": [ 00:39:13.153 { 00:39:13.153 "params": { 00:39:13.153 "io_mechanism": "io_uring", 00:39:13.153 "conserve_cpu": true, 00:39:13.153 "filename": "/dev/nvme0n1", 00:39:13.153 "name": "xnvme_bdev" 00:39:13.153 }, 00:39:13.153 "method": "bdev_xnvme_create" 00:39:13.153 }, 00:39:13.153 { 00:39:13.153 "method": "bdev_wait_for_examine" 00:39:13.153 } 00:39:13.153 ] 00:39:13.153 } 00:39:13.153 ] 00:39:13.153 } 00:39:13.153 [2024-11-20 14:00:10.177421] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
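[Annotation] To confirm the -c variant really landed in the generated config above ("conserve_cpu": true), a jq probe of the gen_conf output works. The filter shape follows the JSON layout shown in the log, and conf.json stands in for a captured copy of that output; both are assumptions, not harness code:

    jq -r '.subsystems[].config[]
           | select(.method == "bdev_xnvme_create").params.conserve_cpu' conf.json   # -> true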
00:39:13.153 [2024-11-20 14:00:10.177610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72475 ] 00:39:13.153 [2024-11-20 14:00:10.366778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.413 [2024-11-20 14:00:10.487007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.672 Running I/O for 5 seconds... 00:39:15.548 28928.00 IOPS, 113.00 MiB/s [2024-11-20T14:00:14.249Z] 29152.00 IOPS, 113.88 MiB/s [2024-11-20T14:00:15.187Z] 29184.00 IOPS, 114.00 MiB/s [2024-11-20T14:00:16.176Z] 28832.00 IOPS, 112.62 MiB/s 00:39:18.853 Latency(us) 00:39:18.853 [2024-11-20T14:00:16.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.853 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:39:18.853 xnvme_bdev : 5.00 28606.45 111.74 0.00 0.00 2231.36 1388.74 12170.97 00:39:18.853 [2024-11-20T14:00:16.176Z] =================================================================================================================== 00:39:18.853 [2024-11-20T14:00:16.176Z] Total : 28606.45 111.74 0.00 0.00 2231.36 1388.74 12170.97 00:39:19.798 ************************************ 00:39:19.798 END TEST xnvme_bdevperf 00:39:19.798 ************************************ 00:39:19.798 00:39:19.798 real 0m14.187s 00:39:19.798 user 0m7.206s 00:39:19.798 sys 0m6.444s 00:39:19.798 14:00:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.798 14:00:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:20.056 14:00:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:39:20.056 14:00:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:20.056 14:00:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:20.056 14:00:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:20.056 ************************************ 00:39:20.057 START TEST xnvme_fio_plugin 00:39:20.057 ************************************ 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:20.057 14:00:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:20.057 { 00:39:20.057 "subsystems": [ 00:39:20.057 { 00:39:20.057 "subsystem": "bdev", 00:39:20.057 "config": [ 00:39:20.057 { 00:39:20.057 "params": { 00:39:20.057 "io_mechanism": "io_uring", 00:39:20.057 "conserve_cpu": true, 00:39:20.057 "filename": "/dev/nvme0n1", 00:39:20.057 "name": "xnvme_bdev" 00:39:20.057 }, 00:39:20.057 "method": "bdev_xnvme_create" 00:39:20.057 }, 00:39:20.057 { 00:39:20.057 "method": "bdev_wait_for_examine" 00:39:20.057 } 00:39:20.057 ] 00:39:20.057 } 00:39:20.057 ] 00:39:20.057 } 00:39:20.316 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:20.316 fio-3.35 00:39:20.316 Starting 1 thread 00:39:26.880 00:39:26.880 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72601: Wed Nov 20 14:00:23 2024 00:39:26.880 read: IOPS=31.1k, BW=121MiB/s (127MB/s)(608MiB/5001msec) 00:39:26.880 slat (nsec): min=2554, max=72471, avg=4497.27, stdev=1625.43 00:39:26.880 clat (usec): min=1378, max=6571, avg=1878.41, stdev=250.22 00:39:26.880 lat (usec): min=1382, max=6577, avg=1882.91, stdev=250.73 00:39:26.880 clat percentiles (usec): 00:39:26.880 | 1.00th=[ 1565], 5.00th=[ 1631], 10.00th=[ 1680], 20.00th=[ 1729], 00:39:26.880 | 30.00th=[ 1762], 40.00th=[ 1811], 50.00th=[ 1844], 60.00th=[ 1876], 00:39:26.880 | 70.00th=[ 1926], 80.00th=[ 1975], 90.00th=[ 2089], 95.00th=[ 2212], 00:39:26.880 | 99.00th=[ 2769], 99.50th=[ 3392], 99.90th=[ 4686], 99.95th=[ 4817], 00:39:26.880 | 99.99th=[ 5014] 00:39:26.880 bw ( KiB/s): min=116224, max=133120, per=99.81%, avg=124176.44, 
stdev=5002.88, samples=9 00:39:26.880 iops : min=29056, max=33280, avg=31044.00, stdev=1250.76, samples=9 00:39:26.880 lat (msec) : 2=82.69%, 4=17.06%, 10=0.24% 00:39:26.880 cpu : usr=34.26%, sys=61.90%, ctx=2458, majf=0, minf=762 00:39:26.880 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:39:26.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.880 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:39:26.880 issued rwts: total=155544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.880 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:26.880 00:39:26.880 Run status group 0 (all jobs): 00:39:26.880 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=608MiB (637MB), run=5001-5001msec 00:39:27.816 ----------------------------------------------------- 00:39:27.816 Suppressions used: 00:39:27.816 count bytes template 00:39:27.816 1 11 /usr/src/fio/parse.c 00:39:27.816 1 8 libtcmalloc_minimal.so 00:39:27.816 1 904 libcrypto.so 00:39:27.816 ----------------------------------------------------- 00:39:27.816 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
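The xtrace above shows how the harness picks up the ASan runtime: it runs ldd on the fio plugin, greps out libasan, and preloads both the runtime and the plugin so that fio, which is not itself instrumented, can resolve the spdk_bdev engine. Condensed into a standalone sketch, reusing the illustrative config path from the earlier sketch:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Third ldd column is the resolved library path, as in the awk call above
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
[[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev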
00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:27.816 14:00:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:27.816 { 00:39:27.816 "subsystems": [ 00:39:27.816 { 00:39:27.816 "subsystem": "bdev", 00:39:27.816 "config": [ 00:39:27.816 { 00:39:27.816 "params": { 00:39:27.816 "io_mechanism": "io_uring", 00:39:27.816 "conserve_cpu": true, 00:39:27.816 "filename": "/dev/nvme0n1", 00:39:27.816 "name": "xnvme_bdev" 00:39:27.816 }, 00:39:27.816 "method": "bdev_xnvme_create" 00:39:27.816 }, 00:39:27.816 { 00:39:27.816 "method": "bdev_wait_for_examine" 00:39:27.816 } 00:39:27.816 ] 00:39:27.816 } 00:39:27.816 ] 00:39:27.816 } 00:39:27.816 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:27.816 fio-3.35 00:39:27.816 Starting 1 thread 00:39:34.425 00:39:34.425 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72700: Wed Nov 20 14:00:30 2024 00:39:34.425 write: IOPS=8517, BW=33.3MiB/s (34.9MB/s)(167MiB/5005msec); 0 zone resets 00:39:34.425 slat (usec): min=2, max=226, avg= 3.98, stdev= 3.53 00:39:34.425 clat (usec): min=81, max=39659, avg=7505.61, stdev=5704.03 00:39:34.425 lat (usec): min=84, max=39662, avg=7509.59, stdev=5703.81 00:39:34.425 clat percentiles (usec): 00:39:34.425 | 1.00th=[ 126], 5.00th=[ 163], 10.00th=[ 212], 20.00th=[ 310], 00:39:34.425 | 30.00th=[ 5669], 40.00th=[ 6259], 50.00th=[ 6915], 60.00th=[ 7898], 00:39:34.425 | 70.00th=[ 8979], 80.00th=[12256], 90.00th=[15795], 95.00th=[17433], 00:39:34.425 | 99.00th=[23987], 99.50th=[27132], 99.90th=[32900], 99.95th=[36439], 00:39:34.425 | 99.99th=[39060] 00:39:34.425 bw ( KiB/s): min=19520, max=51920, per=91.49%, avg=31169.44, stdev=11214.30, samples=9 00:39:34.425 iops : min= 4880, max=12980, avg=7792.33, stdev=2803.60, samples=9 00:39:34.425 lat (usec) : 100=0.10%, 250=14.00%, 500=9.42%, 750=0.40%, 1000=0.03% 00:39:34.425 lat (msec) : 2=0.01%, 4=0.03%, 10=50.30%, 20=23.75%, 50=1.95% 00:39:34.425 cpu : usr=70.08%, sys=20.00%, ctx=18, majf=0, minf=762 00:39:34.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=26.4%, >=64=73.6% 00:39:34.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.425 complete : 0=0.0%, 4=99.3%, 8=0.6%, 16=0.1%, 32=0.0%, 64=0.1%, >=64=0.0% 00:39:34.425 issued rwts: total=0,42629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:34.425 00:39:34.425 Run status group 0 (all jobs): 00:39:34.425 WRITE: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=167MiB (175MB), run=5005-5005msec 00:39:34.994 ----------------------------------------------------- 00:39:34.994 Suppressions used: 00:39:34.994 count bytes template 00:39:34.994 1 11 /usr/src/fio/parse.c 00:39:34.994 1 8 libtcmalloc_minimal.so 00:39:34.994 1 904 libcrypto.so 00:39:34.994 ----------------------------------------------------- 00:39:34.994 00:39:35.253 00:39:35.253 real 0m15.155s 00:39:35.253 user 0m9.508s 00:39:35.253 sys 0m4.684s 00:39:35.253 14:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:39:35.253 14:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:35.253 ************************************ 00:39:35.253 END TEST xnvme_fio_plugin 00:39:35.253 ************************************ 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:39:35.253 14:00:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:39:35.253 14:00:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:35.253 14:00:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:35.253 14:00:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:35.253 ************************************ 00:39:35.253 START TEST xnvme_rpc 00:39:35.253 ************************************ 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72796 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72796 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72796 ']' 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:35.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:35.253 14:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:35.253 [2024-11-20 14:00:32.523426] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
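The xnvme_rpc test that follows drives a create/inspect/delete cycle against this freshly started target. Driven by hand, the same sequence would look roughly like this (rpc.py location assumed from the repo layout; the jq filters are the ones xnvme/common.sh uses):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create an xnvme bdev on the NVMe generic char device via io_uring_cmd
$rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# Read back one creation parameter from the runtime config
$rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# Tear the bdev down again
$rpc bdev_xnvme_delete xnvme_bdev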
00:39:35.253 [2024-11-20 14:00:32.523629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72796 ] 00:39:35.512 [2024-11-20 14:00:32.723952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.771 [2024-11-20 14:00:32.902765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 xnvme_bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:39:36.707 
14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72796 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72796 ']' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72796 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.707 14:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72796 00:39:36.967 14:00:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:36.967 14:00:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:36.967 killing process with pid 72796 00:39:36.967 14:00:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72796' 00:39:36.967 14:00:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72796 00:39:36.967 14:00:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72796 00:39:39.503 00:39:39.503 real 0m4.131s 00:39:39.503 user 0m4.198s 00:39:39.503 sys 0m0.548s 00:39:39.503 14:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:39.503 14:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:39.503 ************************************ 00:39:39.503 END TEST xnvme_rpc 00:39:39.503 ************************************ 00:39:39.503 14:00:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:39:39.503 14:00:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:39.503 14:00:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:39.503 14:00:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:39.503 ************************************ 00:39:39.503 START TEST xnvme_bdevperf 00:39:39.503 ************************************ 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:39.503 14:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:39.503 { 00:39:39.503 "subsystems": [ 00:39:39.503 { 00:39:39.503 "subsystem": "bdev", 00:39:39.503 "config": [ 00:39:39.503 { 00:39:39.503 "params": { 00:39:39.503 "io_mechanism": "io_uring_cmd", 00:39:39.503 "conserve_cpu": false, 00:39:39.503 "filename": "/dev/ng0n1", 00:39:39.503 "name": "xnvme_bdev" 00:39:39.503 }, 00:39:39.503 "method": "bdev_xnvme_create" 00:39:39.503 }, 00:39:39.503 { 00:39:39.503 "method": "bdev_wait_for_examine" 00:39:39.503 } 00:39:39.503 ] 00:39:39.503 } 00:39:39.503 ] 00:39:39.503 } 00:39:39.503 [2024-11-20 14:00:36.696519] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:39.503 [2024-11-20 14:00:36.696754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72877 ] 00:39:39.764 [2024-11-20 14:00:36.888873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.764 [2024-11-20 14:00:37.012274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.336 Running I/O for 5 seconds... 00:39:42.204 49318.00 IOPS, 192.65 MiB/s [2024-11-20T14:00:40.460Z] 50836.00 IOPS, 198.58 MiB/s [2024-11-20T14:00:41.396Z] 50823.00 IOPS, 198.53 MiB/s [2024-11-20T14:00:42.772Z] 51644.50 IOPS, 201.74 MiB/s [2024-11-20T14:00:42.772Z] 51845.40 IOPS, 202.52 MiB/s 00:39:45.449 Latency(us) 00:39:45.449 [2024-11-20T14:00:42.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.449 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:39:45.449 xnvme_bdev : 5.01 51803.24 202.36 0.00 0.00 1231.40 485.67 4649.94 00:39:45.449 [2024-11-20T14:00:42.772Z] =================================================================================================================== 00:39:45.449 [2024-11-20T14:00:42.772Z] Total : 51803.24 202.36 0.00 0.00 1231.40 485.67 4649.94 00:39:46.386 14:00:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:46.386 14:00:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:39:46.386 14:00:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:46.386 14:00:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:46.386 14:00:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:46.386 { 00:39:46.386 "subsystems": [ 00:39:46.386 { 00:39:46.386 "subsystem": "bdev", 00:39:46.386 "config": [ 00:39:46.386 { 00:39:46.386 "params": { 00:39:46.386 "io_mechanism": "io_uring_cmd", 00:39:46.386 "conserve_cpu": false, 00:39:46.386 "filename": "/dev/ng0n1", 00:39:46.386 "name": "xnvme_bdev" 00:39:46.386 }, 00:39:46.386 "method": "bdev_xnvme_create" 00:39:46.386 }, 00:39:46.386 { 00:39:46.386 "method": "bdev_wait_for_examine" 00:39:46.386 } 00:39:46.386 ] 00:39:46.386 } 00:39:46.386 ] 00:39:46.386 } 00:39:46.386 [2024-11-20 14:00:43.668062] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
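As a sanity check on the randread table above: with bdevperf keeping the queue depth pinned at 64, Little's law (L = lambda * W) ties the average latency to the IOPS figure, and the two columns agree:

# 64 in-flight I/Os at 51803.24 IOPS -> ~1235 us per I/O, matching the
# reported 1231.40 us average within sampling noise:
awk 'BEGIN { printf "%.1f us\n", 64 / 51803.24 * 1e6 }'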
00:39:46.386 [2024-11-20 14:00:43.668227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72952 ] 00:39:46.645 [2024-11-20 14:00:43.849056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.645 [2024-11-20 14:00:43.964909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.212 Running I/O for 5 seconds... 00:39:49.145 43276.00 IOPS, 169.05 MiB/s [2024-11-20T14:00:47.404Z] 42734.50 IOPS, 166.93 MiB/s [2024-11-20T14:00:48.339Z] 43053.00 IOPS, 168.18 MiB/s [2024-11-20T14:00:49.714Z] 43265.75 IOPS, 169.01 MiB/s [2024-11-20T14:00:49.714Z] 42659.20 IOPS, 166.64 MiB/s 00:39:52.391 Latency(us) 00:39:52.391 [2024-11-20T14:00:49.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.391 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:39:52.391 xnvme_bdev : 5.01 42619.98 166.48 0.00 0.00 1495.85 542.23 5305.30 00:39:52.391 [2024-11-20T14:00:49.714Z] =================================================================================================================== 00:39:52.391 [2024-11-20T14:00:49.714Z] Total : 42619.98 166.48 0.00 0.00 1495.85 542.23 5305.30 00:39:53.769 14:00:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:53.769 14:00:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:39:53.769 14:00:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:53.769 14:00:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:53.769 14:00:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:53.769 { 00:39:53.769 "subsystems": [ 00:39:53.769 { 00:39:53.769 "subsystem": "bdev", 00:39:53.769 "config": [ 00:39:53.769 { 00:39:53.769 "params": { 00:39:53.769 "io_mechanism": "io_uring_cmd", 00:39:53.769 "conserve_cpu": false, 00:39:53.770 "filename": "/dev/ng0n1", 00:39:53.770 "name": "xnvme_bdev" 00:39:53.770 }, 00:39:53.770 "method": "bdev_xnvme_create" 00:39:53.770 }, 00:39:53.770 { 00:39:53.770 "method": "bdev_wait_for_examine" 00:39:53.770 } 00:39:53.770 ] 00:39:53.770 } 00:39:53.770 ] 00:39:53.770 } 00:39:53.770 [2024-11-20 14:00:50.839708] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:53.770 [2024-11-20 14:00:50.840831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73032 ] 00:39:53.770 [2024-11-20 14:00:51.024614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.028 [2024-11-20 14:00:51.146640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.287 Running I/O for 5 seconds... 
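Unlike the earlier block-device passes, the io_uring_cmd runs also exercise unmap and write_zeroes. The loop driving these four runs amounts to the following condensed sketch, with gen_conf standing in for the harness helper that prints the JSON shown above:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
for io_pattern in randread randwrite unmap write_zeroes; do
    # process substitution plays the role of the /dev/fd/62 redirection
    "$bdevperf" --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 \
        -T xnvme_bdev -o 4096
done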
00:39:56.183 72433.00 IOPS, 282.94 MiB/s [2024-11-20T14:00:54.883Z] 66375.00 IOPS, 259.28 MiB/s [2024-11-20T14:00:55.818Z] 64712.00 IOPS, 252.78 MiB/s [2024-11-20T14:00:56.760Z] 64305.00 IOPS, 251.19 MiB/s [2024-11-20T14:00:56.760Z] 63587.40 IOPS, 248.39 MiB/s 00:39:59.437 Latency(us) 00:39:59.437 [2024-11-20T14:00:56.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.437 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:39:59.437 xnvme_bdev : 5.00 63572.20 248.33 0.00 0.00 1003.63 358.89 8488.47 00:39:59.437 [2024-11-20T14:00:56.760Z] =================================================================================================================== 00:39:59.437 [2024-11-20T14:00:56.760Z] Total : 63572.20 248.33 0.00 0.00 1003.63 358.89 8488.47 00:40:00.373 14:00:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:00.373 14:00:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:40:00.373 14:00:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:40:00.373 14:00:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:40:00.373 14:00:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:00.632 { 00:40:00.632 "subsystems": [ 00:40:00.632 { 00:40:00.632 "subsystem": "bdev", 00:40:00.632 "config": [ 00:40:00.632 { 00:40:00.632 "params": { 00:40:00.632 "io_mechanism": "io_uring_cmd", 00:40:00.632 "conserve_cpu": false, 00:40:00.632 "filename": "/dev/ng0n1", 00:40:00.632 "name": "xnvme_bdev" 00:40:00.632 }, 00:40:00.632 "method": "bdev_xnvme_create" 00:40:00.632 }, 00:40:00.632 { 00:40:00.632 "method": "bdev_wait_for_examine" 00:40:00.632 } 00:40:00.632 ] 00:40:00.632 } 00:40:00.632 ] 00:40:00.632 } 00:40:00.632 [2024-11-20 14:00:57.774398] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:00.632 [2024-11-20 14:00:57.774581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73112 ] 00:40:00.632 [2024-11-20 14:00:57.952380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.890 [2024-11-20 14:00:58.071687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.148 Running I/O for 5 seconds... 
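These io_uring_cmd runs target /dev/ng0n1, the NVMe generic character device used for uring command passthrough, rather than the /dev/nvme0n1 block node the plain io_uring runs used. On a machine like this one the two nodes would be distinguished by device type (illustrative listing, not captured in this log):

ls -l /dev/ng0n1 /dev/nvme0n1
# crw-------. ... /dev/ng0n1     <- char device, NVMe generic (passthru)
# brw-rw----. ... /dev/nvme0n1   <- block device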
00:40:03.458 1990.00 IOPS, 7.77 MiB/s [2024-11-20T14:01:01.716Z] 1272.50 IOPS, 4.97 MiB/s [2024-11-20T14:01:02.650Z] 1323.00 IOPS, 5.17 MiB/s [2024-11-20T14:01:03.586Z] 1418.25 IOPS, 5.54 MiB/s [2024-11-20T14:01:03.586Z] 6047.60 IOPS, 23.62 MiB/s 00:40:06.263 Latency(us) 00:40:06.263 [2024-11-20T14:01:03.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.263 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:40:06.263 xnvme_bdev : 5.01 6066.34 23.70 0.00 0.00 10545.97 135.56 379484.65 00:40:06.263 [2024-11-20T14:01:03.586Z] =================================================================================================================== 00:40:06.263 [2024-11-20T14:01:03.586Z] Total : 6066.34 23.70 0.00 0.00 10545.97 135.56 379484.65 00:40:07.639 00:40:07.639 real 0m28.113s 00:40:07.639 user 0m14.011s 00:40:07.639 sys 0m13.704s 00:40:07.639 14:01:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:07.639 14:01:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:07.639 ************************************ 00:40:07.639 END TEST xnvme_bdevperf 00:40:07.639 ************************************ 00:40:07.639 14:01:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:40:07.639 14:01:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:07.639 14:01:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:07.639 14:01:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:07.639 ************************************ 00:40:07.639 START TEST xnvme_fio_plugin 00:40:07.639 ************************************ 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:07.639 14:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:07.639 { 00:40:07.639 "subsystems": [ 00:40:07.639 { 00:40:07.639 "subsystem": "bdev", 00:40:07.639 "config": [ 00:40:07.639 { 00:40:07.639 "params": { 00:40:07.639 "io_mechanism": "io_uring_cmd", 00:40:07.639 "conserve_cpu": false, 00:40:07.639 "filename": "/dev/ng0n1", 00:40:07.640 "name": "xnvme_bdev" 00:40:07.640 }, 00:40:07.640 "method": "bdev_xnvme_create" 00:40:07.640 }, 00:40:07.640 { 00:40:07.640 "method": "bdev_wait_for_examine" 00:40:07.640 } 00:40:07.640 ] 00:40:07.640 } 00:40:07.640 ] 00:40:07.640 } 00:40:07.640 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:40:07.640 fio-3.35 00:40:07.640 Starting 1 thread 00:40:14.203 00:40:14.203 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73237: Wed Nov 20 14:01:10 2024 00:40:14.203 read: IOPS=54.3k, BW=212MiB/s (223MB/s)(1061MiB/5001msec) 00:40:14.203 slat (nsec): min=2578, max=96608, avg=3456.73, stdev=964.13 00:40:14.203 clat (usec): min=140, max=4737, avg=1041.40, stdev=158.96 00:40:14.203 lat (usec): min=143, max=4741, avg=1044.86, stdev=159.25 00:40:14.203 clat percentiles (usec): 00:40:14.203 | 1.00th=[ 807], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 922], 00:40:14.203 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1020], 60.00th=[ 1057], 00:40:14.203 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1221], 95.00th=[ 1287], 00:40:14.203 | 99.00th=[ 1418], 99.50th=[ 1500], 99.90th=[ 2507], 99.95th=[ 3228], 00:40:14.203 | 99.99th=[ 3916] 00:40:14.203 bw ( KiB/s): min=183441, max=233984, per=99.65%, avg=216517.44, stdev=18658.11, samples=9 00:40:14.203 iops : min=45860, max=58496, avg=54129.33, stdev=4664.58, samples=9 00:40:14.203 lat (usec) : 250=0.01%, 500=0.02%, 750=0.25%, 1000=43.29% 00:40:14.203 lat (msec) : 2=56.23%, 4=0.21%, 10=0.01% 00:40:14.203 cpu : usr=31.30%, sys=67.94%, ctx=10, majf=0, minf=762 00:40:14.203 IO depths : 1=1.4%, 2=3.0%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.4%, >=64=1.6% 00:40:14.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:14.203 complete : 0=0.0%, 4=98.4%, 
8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:40:14.203 issued rwts: total=271662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:14.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:40:14.203 00:40:14.203 Run status group 0 (all jobs): 00:40:14.203 READ: bw=212MiB/s (223MB/s), 212MiB/s-212MiB/s (223MB/s-223MB/s), io=1061MiB (1113MB), run=5001-5001msec 00:40:15.165 ----------------------------------------------------- 00:40:15.165 Suppressions used: 00:40:15.165 count bytes template 00:40:15.165 1 11 /usr/src/fio/parse.c 00:40:15.165 1 8 libtcmalloc_minimal.so 00:40:15.165 1 904 libcrypto.so 00:40:15.165 ----------------------------------------------------- 00:40:15.165 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:15.165 14:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev 
--direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:15.165 { 00:40:15.165 "subsystems": [ 00:40:15.165 { 00:40:15.165 "subsystem": "bdev", 00:40:15.165 "config": [ 00:40:15.165 { 00:40:15.165 "params": { 00:40:15.165 "io_mechanism": "io_uring_cmd", 00:40:15.165 "conserve_cpu": false, 00:40:15.165 "filename": "/dev/ng0n1", 00:40:15.165 "name": "xnvme_bdev" 00:40:15.165 }, 00:40:15.165 "method": "bdev_xnvme_create" 00:40:15.165 }, 00:40:15.165 { 00:40:15.165 "method": "bdev_wait_for_examine" 00:40:15.165 } 00:40:15.165 ] 00:40:15.165 } 00:40:15.165 ] 00:40:15.165 } 00:40:15.424 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:40:15.424 fio-3.35 00:40:15.424 Starting 1 thread 00:40:21.990 00:40:21.990 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73333: Wed Nov 20 14:01:18 2024 00:40:21.990 write: IOPS=46.0k, BW=180MiB/s (188MB/s)(899MiB/5002msec); 0 zone resets 00:40:21.990 slat (nsec): min=2786, max=60189, avg=4528.87, stdev=1615.35 00:40:21.990 clat (usec): min=287, max=5953, avg=1214.12, stdev=188.84 00:40:21.990 lat (usec): min=291, max=5958, avg=1218.65, stdev=189.47 00:40:21.990 clat percentiles (usec): 00:40:21.990 | 1.00th=[ 898], 5.00th=[ 979], 10.00th=[ 1020], 20.00th=[ 1074], 00:40:21.990 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:40:21.990 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1450], 95.00th=[ 1582], 00:40:21.990 | 99.00th=[ 1844], 99.50th=[ 1926], 99.90th=[ 2114], 99.95th=[ 2245], 00:40:21.990 | 99.99th=[ 2507] 00:40:21.990 bw ( KiB/s): min=171808, max=196872, per=100.00%, avg=185776.89, stdev=8470.26, samples=9 00:40:21.990 iops : min=42952, max=49218, avg=46444.22, stdev=2117.57, samples=9 00:40:21.990 lat (usec) : 500=0.01%, 750=0.58%, 1000=6.61% 00:40:21.990 lat (msec) : 2=92.55%, 4=0.26%, 10=0.01% 00:40:21.990 cpu : usr=35.35%, sys=63.79%, ctx=9, majf=0, minf=762 00:40:21.990 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.3%, 16=24.5%, 32=50.9%, >=64=1.6% 00:40:21.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.990 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:40:21.990 issued rwts: total=0,230050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:40:21.990 00:40:21.990 Run status group 0 (all jobs): 00:40:21.990 WRITE: bw=180MiB/s (188MB/s), 180MiB/s-180MiB/s (188MB/s-188MB/s), io=899MiB (942MB), run=5002-5002msec 00:40:22.557 ----------------------------------------------------- 00:40:22.557 Suppressions used: 00:40:22.557 count bytes template 00:40:22.557 1 11 /usr/src/fio/parse.c 00:40:22.557 1 8 libtcmalloc_minimal.so 00:40:22.557 1 904 libcrypto.so 00:40:22.557 ----------------------------------------------------- 00:40:22.557 00:40:22.557 ************************************ 00:40:22.557 END TEST xnvme_fio_plugin 00:40:22.557 ************************************ 00:40:22.557 00:40:22.557 real 0m15.128s 00:40:22.557 user 0m7.454s 00:40:22.557 sys 0m7.312s 00:40:22.557 14:01:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.557 14:01:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:40:22.816 14:01:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:40:22.816 14:01:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:40:22.816 14:01:19 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:40:22.816 14:01:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:40:22.816 14:01:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:22.816 14:01:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:22.816 14:01:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:22.816 ************************************ 00:40:22.816 START TEST xnvme_rpc 00:40:22.816 ************************************ 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73423 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73423 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73423 ']' 00:40:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.816 14:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:22.816 [2024-11-20 14:01:20.070198] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
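This xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled, which per the cc mapping above is just the extra -c flag on the create RPC. With rpc set as in the earlier sketch:

# Same cycle as before, now with CPU conservation on:
$rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
$rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# -> true
$rpc bdev_xnvme_delete xnvme_bdev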
00:40:22.816 [2024-11-20 14:01:20.070641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73423 ] 00:40:23.075 [2024-11-20 14:01:20.264213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.075 [2024-11-20 14:01:20.382090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.010 xnvme_bdev 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.010 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73423 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73423 ']' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73423 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73423 00:40:24.268 killing process with pid 73423 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73423' 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73423 00:40:24.268 14:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73423 00:40:26.803 00:40:26.803 real 0m4.060s 00:40:26.803 user 0m4.248s 00:40:26.803 sys 0m0.548s 00:40:26.803 ************************************ 00:40:26.803 END TEST xnvme_rpc 00:40:26.803 ************************************ 00:40:26.803 14:01:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.803 14:01:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:26.803 14:01:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:40:26.803 14:01:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:26.803 14:01:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.803 14:01:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:26.803 ************************************ 00:40:26.803 START TEST xnvme_bdevperf 00:40:26.803 ************************************ 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:40:26.803 14:01:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:26.803 { 00:40:26.803 "subsystems": [ 00:40:26.803 { 00:40:26.803 "subsystem": "bdev", 00:40:26.803 "config": [ 00:40:26.803 { 00:40:26.803 "params": { 00:40:26.803 "io_mechanism": "io_uring_cmd", 00:40:26.803 "conserve_cpu": true, 00:40:26.803 "filename": "/dev/ng0n1", 00:40:26.803 "name": "xnvme_bdev" 00:40:26.803 }, 00:40:26.803 "method": "bdev_xnvme_create" 00:40:26.803 }, 00:40:26.803 { 00:40:26.803 "method": "bdev_wait_for_examine" 00:40:26.803 } 00:40:26.803 ] 00:40:26.803 } 00:40:26.803 ] 00:40:26.803 } 00:40:27.062 [2024-11-20 14:01:24.172711] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:27.062 [2024-11-20 14:01:24.173084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73504 ] 00:40:27.062 [2024-11-20 14:01:24.363765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.321 [2024-11-20 14:01:24.479036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.580 Running I/O for 5 seconds... 00:40:29.916 48930.00 IOPS, 191.13 MiB/s [2024-11-20T14:01:28.171Z] 49313.50 IOPS, 192.63 MiB/s [2024-11-20T14:01:29.106Z] 50229.33 IOPS, 196.21 MiB/s [2024-11-20T14:01:30.042Z] 50349.00 IOPS, 196.68 MiB/s [2024-11-20T14:01:30.042Z] 49965.40 IOPS, 195.18 MiB/s 00:40:32.719 Latency(us) 00:40:32.719 [2024-11-20T14:01:30.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.719 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:40:32.719 xnvme_bdev : 5.00 49929.59 195.04 0.00 0.00 1277.59 795.79 5742.20 00:40:32.719 [2024-11-20T14:01:30.042Z] =================================================================================================================== 00:40:32.719 [2024-11-20T14:01:30.042Z] Total : 49929.59 195.04 0.00 0.00 1277.59 795.79 5742.20 00:40:34.093 14:01:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:34.093 14:01:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:40:34.093 14:01:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:40:34.094 14:01:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:40:34.094 14:01:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:34.094 { 00:40:34.094 "subsystems": [ 00:40:34.094 { 00:40:34.094 "subsystem": "bdev", 00:40:34.094 "config": [ 00:40:34.094 { 00:40:34.094 "params": { 00:40:34.094 "io_mechanism": "io_uring_cmd", 00:40:34.094 "conserve_cpu": true, 00:40:34.094 "filename": "/dev/ng0n1", 00:40:34.094 "name": "xnvme_bdev" 00:40:34.094 }, 00:40:34.094 "method": "bdev_xnvme_create" 00:40:34.094 }, 00:40:34.094 { 00:40:34.094 "method": "bdev_wait_for_examine" 00:40:34.094 } 00:40:34.094 ] 00:40:34.094 } 00:40:34.094 ] 00:40:34.094 } 00:40:34.094 [2024-11-20 14:01:31.222609] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:40:34.094 [2024-11-20 14:01:31.222788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73585 ] 00:40:34.352 [2024-11-20 14:01:31.424140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.352 [2024-11-20 14:01:31.622190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.920 Running I/O for 5 seconds... 00:40:36.786 44672.00 IOPS, 174.50 MiB/s [2024-11-20T14:01:35.485Z] 34049.50 IOPS, 133.01 MiB/s [2024-11-20T14:01:36.418Z] 28594.00 IOPS, 111.70 MiB/s [2024-11-20T14:01:37.351Z] 26514.75 IOPS, 103.57 MiB/s [2024-11-20T14:01:37.351Z] 23867.80 IOPS, 93.23 MiB/s 00:40:40.028 Latency(us) 00:40:40.028 [2024-11-20T14:01:37.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.028 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:40:40.028 xnvme_bdev : 5.02 23791.67 92.94 0.00 0.00 2682.15 54.61 41194.06 00:40:40.028 [2024-11-20T14:01:37.351Z] =================================================================================================================== 00:40:40.028 [2024-11-20T14:01:37.351Z] Total : 23791.67 92.94 0.00 0.00 2682.15 54.61 41194.06 00:40:41.400 14:01:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:41.400 14:01:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:40:41.400 14:01:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:40:41.400 14:01:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:40:41.400 14:01:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:41.400 { 00:40:41.400 "subsystems": [ 00:40:41.400 { 00:40:41.400 "subsystem": "bdev", 00:40:41.400 "config": [ 00:40:41.400 { 00:40:41.400 "params": { 00:40:41.400 "io_mechanism": "io_uring_cmd", 00:40:41.400 "conserve_cpu": true, 00:40:41.400 "filename": "/dev/ng0n1", 00:40:41.400 "name": "xnvme_bdev" 00:40:41.400 }, 00:40:41.400 "method": "bdev_xnvme_create" 00:40:41.400 }, 00:40:41.400 { 00:40:41.400 "method": "bdev_wait_for_examine" 00:40:41.400 } 00:40:41.400 ] 00:40:41.400 } 00:40:41.400 ] 00:40:41.400 } 00:40:41.400 [2024-11-20 14:01:38.491110] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:41.400 [2024-11-20 14:01:38.492354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73665 ] 00:40:41.400 [2024-11-20 14:01:38.673456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.657 [2024-11-20 14:01:38.822048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.249 Running I/O for 5 seconds... 
00:40:44.116 22396.00 IOPS, 87.48 MiB/s [2024-11-20T14:01:42.379Z] 34942.00 IOPS, 136.49 MiB/s [2024-11-20T14:01:43.314Z] 46292.00 IOPS, 180.83 MiB/s [2024-11-20T14:01:44.688Z] 52095.00 IOPS, 203.50 MiB/s 00:40:47.365 Latency(us) 00:40:47.365 [2024-11-20T14:01:44.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:47.365 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:40:47.365 xnvme_bdev : 5.00 54777.86 213.98 0.00 0.00 1164.36 58.51 9362.29 00:40:47.365 [2024-11-20T14:01:44.688Z] =================================================================================================================== 00:40:47.365 [2024-11-20T14:01:44.688Z] Total : 54777.86 213.98 0.00 0.00 1164.36 58.51 9362.29 00:40:48.299 14:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:48.299 14:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:40:48.299 14:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:40:48.299 14:01:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:40:48.299 14:01:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.299 { 00:40:48.299 "subsystems": [ 00:40:48.299 { 00:40:48.299 "subsystem": "bdev", 00:40:48.299 "config": [ 00:40:48.299 { 00:40:48.299 "params": { 00:40:48.299 "io_mechanism": "io_uring_cmd", 00:40:48.299 "conserve_cpu": true, 00:40:48.299 "filename": "/dev/ng0n1", 00:40:48.299 "name": "xnvme_bdev" 00:40:48.299 }, 00:40:48.299 "method": "bdev_xnvme_create" 00:40:48.299 }, 00:40:48.299 { 00:40:48.299 "method": "bdev_wait_for_examine" 00:40:48.299 } 00:40:48.299 ] 00:40:48.299 } 00:40:48.299 ] 00:40:48.299 } 00:40:48.558 [2024-11-20 14:01:45.635748] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:48.558 [2024-11-20 14:01:45.635890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73750 ] 00:40:48.558 [2024-11-20 14:01:45.812287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.817 [2024-11-20 14:01:45.958637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.075 Running I/O for 5 seconds... 
00:40:51.387 28914.00 IOPS, 112.95 MiB/s [2024-11-20T14:01:49.647Z] 27076.00 IOPS, 105.77 MiB/s [2024-11-20T14:01:50.583Z] 26513.67 IOPS, 103.57 MiB/s [2024-11-20T14:01:51.520Z] 26164.25 IOPS, 102.20 MiB/s [2024-11-20T14:01:51.520Z] 25850.80 IOPS, 100.98 MiB/s 00:40:54.197 Latency(us) 00:40:54.197 [2024-11-20T14:01:51.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.197 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:40:54.197 xnvme_bdev : 5.00 25844.31 100.95 0.00 0.00 2472.05 121.42 21845.33 00:40:54.197 [2024-11-20T14:01:51.520Z] =================================================================================================================== 00:40:54.197 [2024-11-20T14:01:51.520Z] Total : 25844.31 100.95 0.00 0.00 2472.05 121.42 21845.33 00:40:55.591 00:40:55.591 real 0m28.684s 00:40:55.591 user 0m15.641s 00:40:55.591 sys 0m10.256s 00:40:55.591 14:01:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.591 14:01:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:55.591 ************************************ 00:40:55.591 END TEST xnvme_bdevperf 00:40:55.591 ************************************ 00:40:55.591 14:01:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:40:55.591 14:01:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:55.591 14:01:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.591 14:01:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:55.591 ************************************ 00:40:55.591 START TEST xnvme_fio_plugin 00:40:55.591 ************************************ 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:40:55.591 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:55.591 { 00:40:55.591 "subsystems": [ 00:40:55.591 { 00:40:55.591 "subsystem": "bdev", 00:40:55.591 "config": [ 00:40:55.591 { 00:40:55.591 "params": { 00:40:55.591 "io_mechanism": "io_uring_cmd", 00:40:55.591 "conserve_cpu": true, 00:40:55.591 "filename": "/dev/ng0n1", 00:40:55.591 "name": "xnvme_bdev" 00:40:55.591 }, 00:40:55.591 "method": "bdev_xnvme_create" 00:40:55.591 }, 00:40:55.591 { 00:40:55.591 "method": "bdev_wait_for_examine" 00:40:55.591 } 00:40:55.591 ] 00:40:55.591 } 00:40:55.591 ] 00:40:55.591 } 00:40:55.592 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:55.592 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:55.592 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:40:55.592 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:55.592 14:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:40:55.877 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:40:55.877 fio-3.35 00:40:55.877 Starting 1 thread 00:41:02.447 00:41:02.447 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73874: Wed Nov 20 14:01:58 2024 00:41:02.447 read: IOPS=47.6k, BW=186MiB/s (195MB/s)(930MiB/5001msec) 00:41:02.447 slat (nsec): min=2350, max=68324, avg=3944.00, stdev=1104.11 00:41:02.447 clat (usec): min=141, max=2596, avg=1186.82, stdev=131.42 00:41:02.447 lat (usec): min=146, max=2615, avg=1190.76, stdev=131.65 00:41:02.447 clat percentiles (usec): 00:41:02.447 | 1.00th=[ 955], 5.00th=[ 1012], 10.00th=[ 1037], 20.00th=[ 1074], 00:41:02.447 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:41:02.447 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1401], 00:41:02.447 | 99.00th=[ 1598], 99.50th=[ 1762], 99.90th=[ 2024], 99.95th=[ 2180], 00:41:02.447 | 99.99th=[ 2474] 00:41:02.447 bw ( KiB/s): min=180736, max=202240, per=99.81%, avg=190122.67, stdev=6854.88, samples=9 00:41:02.447 iops : min=45184, max=50560, avg=47530.67, stdev=1713.72, samples=9 00:41:02.447 lat (usec) : 250=0.01%, 1000=3.89% 00:41:02.447 lat (msec) : 2=95.99%, 4=0.12% 00:41:02.447 cpu : usr=36.16%, sys=61.44%, ctx=12, majf=0, minf=762 00:41:02.447 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:41:02.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.447 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:41:02.447 issued rwts: total=238150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:41:02.447 00:41:02.447 Run status group 0 (all jobs): 00:41:02.447 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=930MiB (975MB), run=5001-5001msec 00:41:03.383 ----------------------------------------------------- 00:41:03.383 Suppressions used: 00:41:03.383 count bytes template 00:41:03.383 1 11 /usr/src/fio/parse.c 00:41:03.383 1 8 libtcmalloc_minimal.so 00:41:03.383 1 904 libcrypto.so 00:41:03.383 ----------------------------------------------------- 00:41:03.383 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:41:03.383 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:03.384 14:02:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:03.384 { 00:41:03.384 "subsystems": [ 00:41:03.384 { 00:41:03.384 "subsystem": "bdev", 00:41:03.384 "config": [ 00:41:03.384 { 00:41:03.384 "params": { 00:41:03.384 "io_mechanism": "io_uring_cmd", 00:41:03.384 "conserve_cpu": true, 00:41:03.384 "filename": "/dev/ng0n1", 00:41:03.384 "name": "xnvme_bdev" 00:41:03.384 }, 00:41:03.384 "method": "bdev_xnvme_create" 00:41:03.384 }, 00:41:03.384 { 00:41:03.384 "method": "bdev_wait_for_examine" 00:41:03.384 } 00:41:03.384 ] 00:41:03.384 } 00:41:03.384 ] 00:41:03.384 } 00:41:03.384 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:41:03.384 fio-3.35 00:41:03.384 Starting 1 thread 00:41:09.950 00:41:09.950 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73966: Wed Nov 20 14:02:06 2024 00:41:09.950 write: IOPS=47.5k, BW=185MiB/s (194MB/s)(927MiB/5001msec); 0 zone resets 00:41:09.950 slat (nsec): min=3495, max=93159, avg=4492.57, stdev=1587.71 00:41:09.950 clat (usec): min=234, max=3615, avg=1172.94, stdev=156.54 00:41:09.950 lat (usec): min=238, max=3650, avg=1177.43, stdev=157.20 00:41:09.950 clat percentiles (usec): 00:41:09.950 | 1.00th=[ 947], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1057], 00:41:09.950 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:41:09.950 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[ 1434], 00:41:09.950 | 99.00th=[ 1795], 99.50th=[ 1893], 99.90th=[ 2114], 99.95th=[ 2311], 00:41:09.950 | 99.99th=[ 3359] 00:41:09.950 bw ( KiB/s): min=183808, max=193536, per=99.74%, avg=189382.22, stdev=2867.59, samples=9 00:41:09.950 iops : min=45952, max=48384, avg=47345.56, stdev=716.90, samples=9 00:41:09.950 lat (usec) : 250=0.01%, 500=0.01%, 1000=7.45% 00:41:09.950 lat (msec) : 2=92.37%, 4=0.17% 00:41:09.950 cpu : usr=36.68%, sys=60.68%, ctx=13, majf=0, minf=762 00:41:09.950 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:41:09.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.950 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:41:09.950 issued rwts: total=0,237388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:41:09.950 00:41:09.950 Run status group 0 (all jobs): 00:41:09.950 WRITE: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=927MiB (972MB), run=5001-5001msec 00:41:10.889 ----------------------------------------------------- 00:41:10.889 Suppressions used: 00:41:10.889 count bytes template 00:41:10.889 1 11 /usr/src/fio/parse.c 00:41:10.889 1 8 libtcmalloc_minimal.so 00:41:10.889 1 904 libcrypto.so 00:41:10.889 ----------------------------------------------------- 00:41:10.889 00:41:10.889 00:41:10.889 real 0m15.168s 00:41:10.889 user 0m7.768s 00:41:10.889 sys 0m6.858s 00:41:10.889 14:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.889 14:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:41:10.889 ************************************ 00:41:10.889 END TEST xnvme_fio_plugin 00:41:10.889 ************************************ 00:41:10.889 14:02:08 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73423 00:41:10.889 14:02:08 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73423 ']' 00:41:10.889 14:02:08 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73423 00:41:10.889 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73423) - No such process 00:41:10.889 Process with pid 73423 is not found 00:41:10.889 14:02:08 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73423 is not found' 00:41:10.889 00:41:10.889 real 3m59.353s 00:41:10.889 user 2m5.454s 00:41:10.889 sys 1m36.339s 00:41:10.889 14:02:08 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.889 14:02:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:10.889 ************************************ 00:41:10.889 END TEST nvme_xnvme 00:41:10.889 ************************************ 00:41:10.889 14:02:08 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:41:10.889 14:02:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:10.889 14:02:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:10.889 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:41:10.889 ************************************ 00:41:10.889 START TEST blockdev_xnvme 00:41:10.889 ************************************ 00:41:10.889 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:41:10.889 * Looking for test storage... 00:41:10.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:41:10.889 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:10.889 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:41:10.889 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:11.148 14:02:08 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.148 --rc genhtml_branch_coverage=1 00:41:11.148 --rc genhtml_function_coverage=1 00:41:11.148 --rc genhtml_legend=1 00:41:11.148 --rc geninfo_all_blocks=1 00:41:11.148 --rc geninfo_unexecuted_blocks=1 00:41:11.148 00:41:11.148 ' 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.148 --rc genhtml_branch_coverage=1 00:41:11.148 --rc genhtml_function_coverage=1 00:41:11.148 --rc genhtml_legend=1 00:41:11.148 --rc geninfo_all_blocks=1 00:41:11.148 --rc geninfo_unexecuted_blocks=1 00:41:11.148 00:41:11.148 ' 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.148 --rc genhtml_branch_coverage=1 00:41:11.148 --rc genhtml_function_coverage=1 00:41:11.148 --rc genhtml_legend=1 00:41:11.148 --rc geninfo_all_blocks=1 00:41:11.148 --rc geninfo_unexecuted_blocks=1 00:41:11.148 00:41:11.148 ' 00:41:11.148 14:02:08 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:11.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.148 --rc genhtml_branch_coverage=1 00:41:11.148 --rc genhtml_function_coverage=1 00:41:11.148 --rc genhtml_legend=1 00:41:11.148 --rc geninfo_all_blocks=1 00:41:11.148 --rc geninfo_unexecuted_blocks=1 00:41:11.148 00:41:11.148 ' 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:41:11.148 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74107 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:41:11.149 14:02:08 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74107 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74107 ']' 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:11.149 14:02:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:11.149 [2024-11-20 14:02:08.350561] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:41:11.149 [2024-11-20 14:02:08.350692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74107 ] 00:41:11.407 [2024-11-20 14:02:08.522361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.407 [2024-11-20 14:02:08.633441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.342 14:02:09 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:12.342 14:02:09 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:41:12.342 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:41:12.342 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:41:12.342 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:41:12.342 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:41:12.342 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:12.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:13.477 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:41:13.477 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:41:13.477 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:41:13.477 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.477 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.477 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring -c' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:41:13.737 nvme0n1 00:41:13.737 nvme1n1 00:41:13.737 nvme2n1 00:41:13.737 nvme2n2 00:41:13.737 nvme2n3 00:41:13.737 nvme3n1 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.737 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.737 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:41:13.737 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.737 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.738 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.738 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.738 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:41:13.738 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:13.738 14:02:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 00:41:13.738 14:02:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "74c4ee17-9cd2-41e8-b76d-b61bdd0ddb32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "74c4ee17-9cd2-41e8-b76d-b61bdd0ddb32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e4db283b-4338-4101-9ee1-777d77d030be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e4db283b-4338-4101-9ee1-777d77d030be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d5a6323-4094-46fd-bdb1-8c12c0eecf06"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d5a6323-4094-46fd-bdb1-8c12c0eecf06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1dee76d6-7869-47b1-9d5f-ae7c45d5d158"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1dee76d6-7869-47b1-9d5f-ae7c45d5d158",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b2da9514-dcd8-4eba-8468-9068cf0ac18c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2da9514-dcd8-4eba-8468-9068cf0ac18c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dd0701b6-5676-4b87-be7f-3b122d97a11f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dd0701b6-5676-4b87-be7f-3b122d97a11f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:41:13.738 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74107 00:41:13.738 14:02:11 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74107 ']' 00:41:13.738 14:02:11 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74107 00:41:13.738 14:02:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:41:13.738 14:02:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:13.738 14:02:11 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74107 00:41:13.997 14:02:11 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:13.998 killing process with pid 74107 00:41:13.998 14:02:11 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:13.998 14:02:11 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74107' 00:41:13.998 14:02:11 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74107 00:41:13.998 14:02:11 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74107 00:41:16.564 14:02:13 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:16.564 14:02:13 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:41:16.564 14:02:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:41:16.564 14:02:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:16.564 14:02:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:16.564 ************************************ 00:41:16.564 START TEST bdev_hello_world 00:41:16.564 ************************************ 00:41:16.564 14:02:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:41:16.564 [2024-11-20 14:02:13.676890] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:16.564 [2024-11-20 14:02:13.677065] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74398 ] 00:41:16.564 [2024-11-20 14:02:13.864111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.823 [2024-11-20 14:02:13.981618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.390 [2024-11-20 14:02:14.435365] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:41:17.390 [2024-11-20 14:02:14.435424] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:41:17.390 [2024-11-20 14:02:14.435444] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:41:17.390 [2024-11-20 14:02:14.437687] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:41:17.390 [2024-11-20 14:02:14.438274] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:41:17.390 [2024-11-20 14:02:14.438322] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:41:17.390 [2024-11-20 14:02:14.438654] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:41:17.390 00:41:17.390 [2024-11-20 14:02:14.438686] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:41:18.326 00:41:18.326 real 0m2.043s 00:41:18.326 user 0m1.655s 00:41:18.326 sys 0m0.271s 00:41:18.326 ************************************ 00:41:18.326 END TEST bdev_hello_world 00:41:18.326 ************************************ 00:41:18.326 14:02:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.326 14:02:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:41:18.584 14:02:15 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:41:18.584 14:02:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:18.584 14:02:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.584 14:02:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:18.584 ************************************ 00:41:18.584 START TEST bdev_bounds 00:41:18.584 ************************************ 00:41:18.584 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:41:18.584 14:02:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74439 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:41:18.585 Process bdevio pid: 74439 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74439' 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74439 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74439 ']' 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:41:18.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.585 14:02:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:18.585 [2024-11-20 14:02:15.775906] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:41:18.585 [2024-11-20 14:02:15.776102] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74439 ] 00:41:18.842 [2024-11-20 14:02:15.958385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:18.842 [2024-11-20 14:02:16.081623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:18.842 [2024-11-20 14:02:16.081717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.842 [2024-11-20 14:02:16.081717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:19.410 14:02:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:19.410 14:02:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:41:19.410 14:02:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:41:19.668 I/O targets: 00:41:19.668 nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:41:19.668 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:41:19.668 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:41:19.668 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:41:19.669 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:41:19.669 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:41:19.669 00:41:19.669 00:41:19.669 CUnit - A unit testing framework for C - Version 2.1-3 00:41:19.669 http://cunit.sourceforge.net/ 00:41:19.669 00:41:19.669 00:41:19.669 Suite: bdevio tests on: nvme3n1 00:41:19.669 Test: blockdev write read block ...passed 00:41:19.669 Test: blockdev write zeroes read block ...passed 00:41:19.669 Test: blockdev write zeroes read no split ...passed 00:41:19.669 Test: blockdev write zeroes read split ...passed 00:41:19.669 Test: blockdev write zeroes read split partial ...passed 00:41:19.669 Test: blockdev reset ...passed 00:41:19.669 Test: blockdev write read 8 blocks ...passed 00:41:19.669 Test: blockdev write read size > 128k ...passed 00:41:19.669 Test: blockdev write read invalid size ...passed 00:41:19.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.669 Test: blockdev write read max offset ...passed 00:41:19.669 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.669 Test: blockdev writev readv 8 blocks ...passed 00:41:19.669 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.669 Test: blockdev writev readv block ...passed 00:41:19.669 Test: blockdev writev readv size > 128k ...passed 00:41:19.669 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.669 Test: blockdev comparev and writev ...passed 00:41:19.669 Test: blockdev nvme passthru rw ...passed 00:41:19.669 Test: blockdev nvme passthru vendor specific ...passed 00:41:19.669 Test: blockdev nvme admin passthru ...passed 00:41:19.669 Test: blockdev copy ...passed 00:41:19.669 Suite: bdevio tests on: nvme2n3 00:41:19.669 Test: blockdev write read block ...passed 00:41:19.669 Test: blockdev write zeroes read block ...passed 00:41:19.669 Test: blockdev write zeroes read no split ...passed 00:41:19.669 Test: blockdev write zeroes read split ...passed 00:41:19.669 Test: blockdev write zeroes read split partial ...passed 00:41:19.669 Test: blockdev reset ...passed 
00:41:19.669 Test: blockdev write read 8 blocks ...passed 00:41:19.669 Test: blockdev write read size > 128k ...passed 00:41:19.669 Test: blockdev write read invalid size ...passed 00:41:19.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.669 Test: blockdev write read max offset ...passed 00:41:19.669 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.669 Test: blockdev writev readv 8 blocks ...passed 00:41:19.669 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.669 Test: blockdev writev readv block ...passed 00:41:19.669 Test: blockdev writev readv size > 128k ...passed 00:41:19.928 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.928 Test: blockdev comparev and writev ...passed 00:41:19.928 Test: blockdev nvme passthru rw ...passed 00:41:19.928 Test: blockdev nvme passthru vendor specific ...passed 00:41:19.928 Test: blockdev nvme admin passthru ...passed 00:41:19.928 Test: blockdev copy ...passed 00:41:19.928 Suite: bdevio tests on: nvme2n2 00:41:19.928 Test: blockdev write read block ...passed 00:41:19.928 Test: blockdev write zeroes read block ...passed 00:41:19.928 Test: blockdev write zeroes read no split ...passed 00:41:19.928 Test: blockdev write zeroes read split ...passed 00:41:19.928 Test: blockdev write zeroes read split partial ...passed 00:41:19.928 Test: blockdev reset ...passed 00:41:19.928 Test: blockdev write read 8 blocks ...passed 00:41:19.928 Test: blockdev write read size > 128k ...passed 00:41:19.928 Test: blockdev write read invalid size ...passed 00:41:19.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.928 Test: blockdev write read max offset ...passed 00:41:19.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.928 Test: blockdev writev readv 8 blocks ...passed 00:41:19.928 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.928 Test: blockdev writev readv block ...passed 00:41:19.928 Test: blockdev writev readv size > 128k ...passed 00:41:19.928 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.928 Test: blockdev comparev and writev ...passed 00:41:19.928 Test: blockdev nvme passthru rw ...passed 00:41:19.928 Test: blockdev nvme passthru vendor specific ...passed 00:41:19.928 Test: blockdev nvme admin passthru ...passed 00:41:19.928 Test: blockdev copy ...passed 00:41:19.928 Suite: bdevio tests on: nvme2n1 00:41:19.928 Test: blockdev write read block ...passed 00:41:19.928 Test: blockdev write zeroes read block ...passed 00:41:19.928 Test: blockdev write zeroes read no split ...passed 00:41:19.928 Test: blockdev write zeroes read split ...passed 00:41:19.928 Test: blockdev write zeroes read split partial ...passed 00:41:19.928 Test: blockdev reset ...passed 00:41:19.928 Test: blockdev write read 8 blocks ...passed 00:41:19.928 Test: blockdev write read size > 128k ...passed 00:41:19.928 Test: blockdev write read invalid size ...passed 00:41:19.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.928 Test: blockdev write read max offset ...passed 00:41:19.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.928 Test: blockdev writev readv 8 blocks 
...passed 00:41:19.928 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.928 Test: blockdev writev readv block ...passed 00:41:19.928 Test: blockdev writev readv size > 128k ...passed 00:41:19.928 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.928 Test: blockdev comparev and writev ...passed 00:41:19.928 Test: blockdev nvme passthru rw ...passed 00:41:19.928 Test: blockdev nvme passthru vendor specific ...passed 00:41:19.928 Test: blockdev nvme admin passthru ...passed 00:41:19.928 Test: blockdev copy ...passed 00:41:19.928 Suite: bdevio tests on: nvme1n1 00:41:19.928 Test: blockdev write read block ...passed 00:41:19.928 Test: blockdev write zeroes read block ...passed 00:41:19.928 Test: blockdev write zeroes read no split ...passed 00:41:19.928 Test: blockdev write zeroes read split ...passed 00:41:19.928 Test: blockdev write zeroes read split partial ...passed 00:41:19.928 Test: blockdev reset ...passed 00:41:19.928 Test: blockdev write read 8 blocks ...passed 00:41:19.928 Test: blockdev write read size > 128k ...passed 00:41:19.928 Test: blockdev write read invalid size ...passed 00:41:19.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.928 Test: blockdev write read max offset ...passed 00:41:19.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.928 Test: blockdev writev readv 8 blocks ...passed 00:41:19.928 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.928 Test: blockdev writev readv block ...passed 00:41:19.928 Test: blockdev writev readv size > 128k ...passed 00:41:19.928 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.928 Test: blockdev comparev and writev ...passed 00:41:19.928 Test: blockdev nvme passthru rw ...passed 00:41:19.928 Test: blockdev nvme passthru vendor specific ...passed 00:41:19.928 Test: blockdev nvme admin passthru ...passed 00:41:19.928 Test: blockdev copy ...passed 00:41:19.928 Suite: bdevio tests on: nvme0n1 00:41:19.928 Test: blockdev write read block ...passed 00:41:19.928 Test: blockdev write zeroes read block ...passed 00:41:20.187 Test: blockdev write zeroes read no split ...passed 00:41:20.187 Test: blockdev write zeroes read split ...passed 00:41:20.187 Test: blockdev write zeroes read split partial ...passed 00:41:20.187 Test: blockdev reset ...passed 00:41:20.187 Test: blockdev write read 8 blocks ...passed 00:41:20.187 Test: blockdev write read size > 128k ...passed 00:41:20.187 Test: blockdev write read invalid size ...passed 00:41:20.187 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:20.187 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:20.187 Test: blockdev write read max offset ...passed 00:41:20.187 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:20.187 Test: blockdev writev readv 8 blocks ...passed 00:41:20.187 Test: blockdev writev readv 30 x 1block ...passed 00:41:20.187 Test: blockdev writev readv block ...passed 00:41:20.187 Test: blockdev writev readv size > 128k ...passed 00:41:20.187 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:20.187 Test: blockdev comparev and writev ...passed 00:41:20.187 Test: blockdev nvme passthru rw ...passed 00:41:20.187 Test: blockdev nvme passthru vendor specific ...passed 00:41:20.187 Test: blockdev nvme admin passthru ...passed 00:41:20.187 Test: blockdev copy ...passed 
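[annotation] The run summary that follows is easier to read with the totals unpacked: each of the six suites above executes the same 23-test matrix, so 6 x 23 = 138 tests, and the 780 asserts are likewise an aggregate across all six suites (130 on average).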
00:41:20.187 00:41:20.187 Run Summary: Type Total Ran Passed Failed Inactive 00:41:20.187 suites 6 6 n/a 0 0 00:41:20.187 tests 138 138 138 0 0 00:41:20.187 asserts 780 780 780 0 n/a 00:41:20.187 00:41:20.187 Elapsed time = 1.405 seconds 00:41:20.187 0 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74439 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74439 ']' 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74439 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74439 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:20.187 killing process with pid 74439 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74439' 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74439 00:41:20.187 14:02:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74439 00:41:21.564 14:02:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:41:21.564 00:41:21.564 real 0m2.922s 00:41:21.564 user 0m7.352s 00:41:21.564 sys 0m0.435s 00:41:21.564 14:02:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.564 ************************************ 00:41:21.564 END TEST bdev_bounds 00:41:21.564 ************************************ 00:41:21.564 14:02:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:21.564 14:02:18 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:41:21.564 14:02:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:41:21.564 14:02:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.564 14:02:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:21.564 ************************************ 00:41:21.564 START TEST bdev_nbd 00:41:21.564 ************************************ 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
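[annotation] Between the END TEST banner above and the bdev_svc launch below, blockdev.sh is assembling the inputs for nbd_function_test: the six bdev names, the first six /dev/nbdX nodes, and an RPC socket private to this test. Pieced together from the @299-@322 xtrace records (here and in the lines that follow), the flow reduces to the sketch below — simplified, with argument plumbing and the cleanup trap omitted:

nbd_function_test() {
    local conf=$1 bdevs=$2
    local rpc_server=/var/tmp/spdk-nbd.sock
    local bdev_list=($bdevs)
    local nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # NBD support is required; skip quietly on kernels without the module
    [[ $(uname -s) == Linux && -e /sys/module/nbd ]] || return 0

    # host the bdevs in a minimal RPC-only SPDK app
    test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
    local nbd_pid=$!
    waitforlisten "$nbd_pid" "$rpc_server"

    # phase 1: attach/detach every bdev to an NBD node and verify it appears
    nbd_rpc_start_stop_verify "$rpc_server" "${bdev_list[*]}"
    # phase 2: push a random pattern through each /dev/nbdX and compare it back
    nbd_rpc_data_verify "$rpc_server" "${bdev_list[*]}" "${nbd_list[*]}"

    killprocess "$nbd_pid"
}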
00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:41:21.564 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74505 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74505 /var/tmp/spdk-nbd.sock 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74505 ']' 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:21.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:21.565 14:02:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:21.565 [2024-11-20 14:02:18.767038] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
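[annotation] The bdev_svc app is now starting; the dense run of autotest_common.sh@872-@893 records that follows is `waitfornbd` being invoked once per attached device. Its logic, condensed from the trace: poll /proc/partitions until the kernel publishes the node, then prove the data path works by reading a single 4 KiB block with O_DIRECT and checking the copy is non-empty. A compact sketch (the retry delay and the scratch-file path are placeholders; the trace uses the repo's test/bdev/nbdtest file):

waitfornbd() {
    local nbd_name=$1 i size
    # wait for the kernel to list the device (trace: grep -q -w ... /proc/partitions)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # read one 4 KiB block with O_DIRECT to confirm the device serves I/O
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}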
00:41:21.565 [2024-11-20 14:02:18.767213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:21.823 [2024-11-20 14:02:18.960117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.823 [2024-11-20 14:02:19.077159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.389 14:02:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:22.389 14:02:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:22.390 14:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:22.957 
1+0 records in 00:41:22.957 1+0 records out 00:41:22.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667253 s, 6.1 MB/s 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:22.957 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:23.216 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:23.216 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.216 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.217 1+0 records in 00:41:23.217 1+0 records out 00:41:23.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839465 s, 4.9 MB/s 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:23.217 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:41:23.475 14:02:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.475 1+0 records in 00:41:23.475 1+0 records out 00:41:23.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047263 s, 8.7 MB/s 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:23.475 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.734 1+0 records in 00:41:23.734 1+0 records out 00:41:23.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611515 s, 6.7 MB/s 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:23.734 14:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:23.993 1+0 records in 00:41:23.993 1+0 records out 00:41:23.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765246 s, 5.4 MB/s 00:41:23.993 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:24.252 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:41:24.511 14:02:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:24.511 1+0 records in 00:41:24.511 1+0 records out 00:41:24.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676719 s, 6.1 MB/s 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:41:24.511 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd0", 00:41:24.771 "bdev_name": "nvme0n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd1", 00:41:24.771 "bdev_name": "nvme1n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd2", 00:41:24.771 "bdev_name": "nvme2n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd3", 00:41:24.771 "bdev_name": "nvme2n2" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd4", 00:41:24.771 "bdev_name": "nvme2n3" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd5", 00:41:24.771 "bdev_name": "nvme3n1" 00:41:24.771 } 00:41:24.771 ]' 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd0", 00:41:24.771 "bdev_name": "nvme0n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd1", 00:41:24.771 "bdev_name": "nvme1n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd2", 00:41:24.771 "bdev_name": "nvme2n1" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd3", 00:41:24.771 "bdev_name": "nvme2n2" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd4", 00:41:24.771 "bdev_name": "nvme2n3" 00:41:24.771 }, 00:41:24.771 { 00:41:24.771 "nbd_device": "/dev/nbd5", 00:41:24.771 "bdev_name": "nvme3n1" 00:41:24.771 } 00:41:24.771 ]' 00:41:24.771 14:02:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:24.771 14:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.031 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.290 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.549 14:02:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.809 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:26.068 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:26.327 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:26.586 14:02:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:41:26.845 /dev/nbd0 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:26.845 1+0 records in 00:41:26.845 1+0 records out 00:41:26.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734656 s, 5.6 MB/s 00:41:26.845 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:27.104 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:41:27.363 /dev/nbd1 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:27.363 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:27.363 1+0 records in 00:41:27.363 1+0 records out 00:41:27.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605461 s, 6.8 MB/s 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:27.364 14:02:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:27.364 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:41:27.623 /dev/nbd10 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:27.623 1+0 records in 00:41:27.623 1+0 records out 00:41:27.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524257 s, 7.8 MB/s 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:27.623 14:02:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:41:27.882 /dev/nbd11 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:27.882 14:02:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:27.882 1+0 records in 00:41:27.882 1+0 records out 00:41:27.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690898 s, 5.9 MB/s 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:27.882 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:41:28.142 /dev/nbd12 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:28.142 1+0 records in 00:41:28.142 1+0 records out 00:41:28.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616068 s, 6.6 MB/s 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:28.142 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:41:28.401 /dev/nbd13 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:28.659 1+0 records in 00:41:28.659 1+0 records out 00:41:28.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779961 s, 5.3 MB/s 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:28.659 14:02:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:28.918 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd0", 00:41:28.918 "bdev_name": "nvme0n1" 00:41:28.918 }, 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd1", 00:41:28.918 "bdev_name": "nvme1n1" 00:41:28.918 }, 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd10", 00:41:28.918 "bdev_name": "nvme2n1" 00:41:28.918 }, 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd11", 00:41:28.918 "bdev_name": "nvme2n2" 00:41:28.918 }, 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd12", 00:41:28.918 "bdev_name": "nvme2n3" 00:41:28.918 }, 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd13", 00:41:28.918 "bdev_name": "nvme3n1" 00:41:28.918 } 00:41:28.918 ]' 00:41:28.918 14:02:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:28.918 { 00:41:28.918 "nbd_device": "/dev/nbd0", 00:41:28.918 "bdev_name": "nvme0n1" 00:41:28.919 }, 00:41:28.919 { 00:41:28.919 "nbd_device": "/dev/nbd1", 00:41:28.919 "bdev_name": "nvme1n1" 00:41:28.919 }, 00:41:28.919 { 00:41:28.919 "nbd_device": "/dev/nbd10", 00:41:28.919 "bdev_name": "nvme2n1" 00:41:28.919 }, 00:41:28.919 { 00:41:28.919 "nbd_device": "/dev/nbd11", 00:41:28.919 "bdev_name": "nvme2n2" 00:41:28.919 }, 00:41:28.919 { 00:41:28.919 "nbd_device": "/dev/nbd12", 00:41:28.919 "bdev_name": "nvme2n3" 00:41:28.919 }, 00:41:28.919 { 00:41:28.919 "nbd_device": "/dev/nbd13", 00:41:28.919 "bdev_name": "nvme3n1" 00:41:28.919 } 00:41:28.919 ]' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:41:28.919 /dev/nbd1 00:41:28.919 /dev/nbd10 00:41:28.919 /dev/nbd11 00:41:28.919 /dev/nbd12 00:41:28.919 /dev/nbd13' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:41:28.919 /dev/nbd1 00:41:28.919 /dev/nbd10 00:41:28.919 /dev/nbd11 00:41:28.919 /dev/nbd12 00:41:28.919 /dev/nbd13' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:41:28.919 256+0 records in 00:41:28.919 256+0 records out 00:41:28.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109057 s, 96.1 MB/s 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:28.919 256+0 records in 00:41:28.919 256+0 records out 00:41:28.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140002 s, 7.5 MB/s 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:28.919 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:41:29.178 256+0 records in 00:41:29.178 256+0 records out 00:41:29.178 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.124834 s, 8.4 MB/s 00:41:29.178 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:29.178 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:41:29.474 256+0 records in 00:41:29.474 256+0 records out 00:41:29.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137023 s, 7.7 MB/s 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:41:29.474 256+0 records in 00:41:29.474 256+0 records out 00:41:29.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124292 s, 8.4 MB/s 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:41:29.474 256+0 records in 00:41:29.474 256+0 records out 00:41:29.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122901 s, 8.5 MB/s 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:29.474 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:41:29.732 256+0 records in 00:41:29.732 256+0 records out 00:41:29.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127944 s, 8.2 MB/s 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:41:29.732 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:29.733 14:02:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:29.991 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:30.250 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:30.508 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:30.508 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:30.508 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:30.508 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:30.766 14:02:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:31.025 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:31.283 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:31.284 14:02:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:31.284 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:31.543 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:31.543 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:31.543 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:41:31.802 14:02:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:41:32.060 malloc_lvol_verify 00:41:32.060 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:41:32.318 32414f1e-cce4-461b-94bb-0aefb78b9a59 00:41:32.318 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:41:32.318 be352226-c0e0-4614-9c40-1a7ff2e544a7 00:41:32.318 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:41:32.578 /dev/nbd0 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
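For orientation, the lvol-verify helper traced above boils down to four RPCs on the /var/tmp/spdk-nbd.sock socket followed by a format; a condensed sketch using the same names and sizes as the trace (the mke2fs output it produces follows below):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store 'lvs'
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # format the exported device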
00:41:32.578 mke2fs 1.47.0 (5-Feb-2023) 00:41:32.578 Discarding device blocks: 0/4096 done 00:41:32.578 Creating filesystem with 4096 1k blocks and 1024 inodes 00:41:32.578 00:41:32.578 Allocating group tables: 0/1 done 00:41:32.578 Writing inode tables: 0/1 done 00:41:32.578 Creating journal (1024 blocks): done 00:41:32.578 Writing superblocks and filesystem accounting information: 0/1 done 00:41:32.578 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:32.578 14:02:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74505 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74505 ']' 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74505 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74505 00:41:32.837 killing process with pid 74505 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74505' 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74505 00:41:32.837 14:02:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74505 00:41:34.216 ************************************ 00:41:34.216 END TEST bdev_nbd 00:41:34.216 ************************************ 00:41:34.216 14:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:41:34.216 00:41:34.216 real 0m12.632s 00:41:34.216 user 0m16.540s 00:41:34.216 sys 0m5.321s 00:41:34.216 14:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:34.216 
14:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:34.216 14:02:31 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:41:34.216 14:02:31 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:41:34.216 14:02:31 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:41:34.216 14:02:31 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:41:34.216 14:02:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:34.216 14:02:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:34.216 14:02:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:34.216 ************************************ 00:41:34.216 START TEST bdev_fio 00:41:34.216 ************************************ 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:41:34.216 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:34.216 ************************************ 00:41:34.216 START TEST bdev_fio_rw_verify 00:41:34.216 ************************************ 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:41:34.216 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:34.217 14:02:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:34.475 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:34.475 fio-3.35 00:41:34.475 Starting 6 threads 00:41:46.725 00:41:46.725 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74934: Wed Nov 20 14:02:42 2024 00:41:46.725 read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1153MiB/10001msec) 00:41:46.725 slat (usec): min=2, max=1239, avg= 6.92, stdev= 6.52 00:41:46.725 clat (usec): min=124, max=4978, avg=632.54, 
stdev=246.50 00:41:46.725 lat (usec): min=128, max=4991, avg=639.46, stdev=247.40 00:41:46.725 clat percentiles (usec): 00:41:46.725 | 50.000th=[ 644], 99.000th=[ 1287], 99.900th=[ 2040], 99.990th=[ 3851], 00:41:46.725 | 99.999th=[ 4948] 00:41:46.725 write: IOPS=29.7k, BW=116MiB/s (122MB/s)(1161MiB/10001msec); 0 zone resets 00:41:46.725 slat (usec): min=12, max=6110, avg=25.90, stdev=35.36 00:41:46.725 clat (usec): min=83, max=8293, avg=735.41, stdev=270.43 00:41:46.725 lat (usec): min=98, max=8329, avg=761.31, stdev=273.82 00:41:46.725 clat percentiles (usec): 00:41:46.725 | 50.000th=[ 725], 99.000th=[ 1532], 99.900th=[ 2212], 99.990th=[ 4948], 00:41:46.725 | 99.999th=[ 7832] 00:41:46.725 bw ( KiB/s): min=97055, max=146383, per=99.71%, avg=118526.89, stdev=2719.40, samples=114 00:41:46.725 iops : min=24263, max=36595, avg=29630.95, stdev=679.81, samples=114 00:41:46.725 lat (usec) : 100=0.01%, 250=3.53%, 500=19.74%, 750=38.58%, 1000=29.62% 00:41:46.725 lat (msec) : 2=8.39%, 4=0.13%, 10=0.01% 00:41:46.725 cpu : usr=56.86%, sys=28.11%, ctx=7788, majf=0, minf=24967 00:41:46.725 IO depths : 1=11.8%, 2=24.1%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:46.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.725 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.725 issued rwts: total=295086,297208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:46.725 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:46.725 00:41:46.725 Run status group 0 (all jobs): 00:41:46.725 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1153MiB (1209MB), run=10001-10001msec 00:41:46.725 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1161MiB (1217MB), run=10001-10001msec 00:41:47.005 ----------------------------------------------------- 00:41:47.005 Suppressions used: 00:41:47.005 count bytes template 00:41:47.005 6 48 /usr/src/fio/parse.c 00:41:47.005 1904 182784 /usr/src/fio/iolog.c 00:41:47.005 1 8 libtcmalloc_minimal.so 00:41:47.005 1 904 libcrypto.so 00:41:47.005 ----------------------------------------------------- 00:41:47.005 00:41:47.005 00:41:47.005 real 0m12.652s 00:41:47.005 user 0m36.196s 00:41:47.005 sys 0m17.318s 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:47.005 ************************************ 00:41:47.005 END TEST bdev_fio_rw_verify 00:41:47.005 ************************************ 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:41:47.005 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "74c4ee17-9cd2-41e8-b76d-b61bdd0ddb32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "74c4ee17-9cd2-41e8-b76d-b61bdd0ddb32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e4db283b-4338-4101-9ee1-777d77d030be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e4db283b-4338-4101-9ee1-777d77d030be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d5a6323-4094-46fd-bdb1-8c12c0eecf06"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d5a6323-4094-46fd-bdb1-8c12c0eecf06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1dee76d6-7869-47b1-9d5f-ae7c45d5d158"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1dee76d6-7869-47b1-9d5f-ae7c45d5d158",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b2da9514-dcd8-4eba-8468-9068cf0ac18c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2da9514-dcd8-4eba-8468-9068cf0ac18c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dd0701b6-5676-4b87-be7f-3b122d97a11f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dd0701b6-5676-4b87-be7f-3b122d97a11f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:41:47.006 /home/vagrant/spdk_repo/spdk 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:41:47.006 00:41:47.006 real 0m12.865s 00:41:47.006 user 0m36.313s 00:41:47.006 sys 0m17.414s 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:47.006 14:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:47.006 ************************************ 00:41:47.006 END TEST bdev_fio 00:41:47.006 ************************************ 00:41:47.006 14:02:44 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:47.006 14:02:44 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:47.006 14:02:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:41:47.006 14:02:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:47.006 14:02:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:47.006 ************************************ 00:41:47.006 START TEST bdev_verify 00:41:47.006 ************************************ 00:41:47.006 14:02:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:47.264 [2024-11-20 14:02:44.387708] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:47.264 [2024-11-20 14:02:44.387887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75111 ] 00:41:47.523 [2024-11-20 14:02:44.589671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:47.523 [2024-11-20 14:02:44.752439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.523 [2024-11-20 14:02:44.752471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:48.093 Running I/O for 5 seconds... 
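The verify pass that follows is a single bdevperf run, reproduced here from the run_test line above for reference: -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, and -m 0x3 the core mask matching the two reactors reported above.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3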
00:41:50.404 24225.00 IOPS, 94.63 MiB/s [2024-11-20T14:02:48.663Z] 25648.00 IOPS, 100.19 MiB/s [2024-11-20T14:02:49.599Z] 24117.67 IOPS, 94.21 MiB/s [2024-11-20T14:02:50.535Z] 24216.25 IOPS, 94.59 MiB/s [2024-11-20T14:02:50.535Z] 24108.80 IOPS, 94.17 MiB/s 00:41:53.212 Latency(us) 00:41:53.212 [2024-11-20T14:02:50.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.212 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0xbd0bd 00:41:53.212 nvme0n1 : 5.04 2799.58 10.94 0.00 0.00 45502.24 3994.58 81389.47 00:41:53.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:41:53.212 nvme0n1 : 5.04 2910.47 11.37 0.00 0.00 43843.01 3729.31 79891.50 00:41:53.212 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0xa0000 00:41:53.212 nvme1n1 : 5.05 1799.18 7.03 0.00 0.00 70642.64 7770.70 84884.72 00:41:53.212 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0xa0000 length 0xa0000 00:41:53.212 nvme1n1 : 5.06 1847.55 7.22 0.00 0.00 68837.30 5679.79 91375.91 00:41:53.212 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0x80000 00:41:53.212 nvme2n1 : 5.06 1769.98 6.91 0.00 0.00 71674.90 11609.23 74898.29 00:41:53.212 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x80000 length 0x80000 00:41:53.212 nvme2n1 : 5.07 1843.74 7.20 0.00 0.00 68851.09 7708.28 91375.91 00:41:53.212 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0x80000 00:41:53.212 nvme2n2 : 5.05 1772.92 6.93 0.00 0.00 71436.24 8987.79 76396.25 00:41:53.212 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x80000 length 0x80000 00:41:53.212 nvme2n2 : 5.06 1845.11 7.21 0.00 0.00 68682.94 9050.21 70903.71 00:41:53.212 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0x80000 00:41:53.212 nvme2n3 : 5.06 1769.25 6.91 0.00 0.00 71463.59 7614.66 91375.91 00:41:53.212 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x80000 length 0x80000 00:41:53.212 nvme2n3 : 5.07 1842.88 7.20 0.00 0.00 68658.45 8550.89 73400.32 00:41:53.212 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x0 length 0x20000 00:41:53.212 nvme3n1 : 5.07 1790.75 7.00 0.00 0.00 70493.97 6085.49 94871.16 00:41:53.212 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:53.212 Verification LBA range: start 0x20000 length 0x20000 00:41:53.212 nvme3n1 : 5.07 1842.05 7.20 0.00 0.00 68578.10 8426.06 83386.76 00:41:53.212 [2024-11-20T14:02:50.535Z] =================================================================================================================== 00:41:53.212 [2024-11-20T14:02:50.535Z] Total : 23833.46 93.10 0.00 0.00 63882.08 3729.31 94871.16 00:41:54.588 00:41:54.588 real 0m7.419s 00:41:54.588 user 0m11.658s 00:41:54.588 sys 0m1.913s 00:41:54.588 14:02:51 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.588 14:02:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:41:54.588 ************************************ 00:41:54.588 END TEST bdev_verify 00:41:54.588 ************************************ 00:41:54.588 14:02:51 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:54.588 14:02:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:41:54.588 14:02:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.588 14:02:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:54.588 ************************************ 00:41:54.588 START TEST bdev_verify_big_io 00:41:54.588 ************************************ 00:41:54.588 14:02:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:54.588 [2024-11-20 14:02:51.832809] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:54.588 [2024-11-20 14:02:51.832931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75220 ] 00:41:54.847 [2024-11-20 14:02:51.999441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:54.847 [2024-11-20 14:02:52.121244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.847 [2024-11-20 14:02:52.121251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:55.415 Running I/O for 5 seconds... 
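The big-I/O variant below reuses the same bdevperf harness with -o 65536, so every I/O is 64 KiB and the MiB/s column is just IOPS times I/O size; for example, the 3464.00 IOPS peak reported below checks out:

    # Sanity-check the MiB/s column: IOPS * IO size / 2^20.
    awk 'BEGIN { printf "%.2f MiB/s\n", 3464.00 * 65536 / 1048576 }'   # -> 216.50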
00:42:01.993 2168.00 IOPS, 135.50 MiB/s [2024-11-20T14:02:59.316Z] 3464.00 IOPS, 216.50 MiB/s [2024-11-20T14:02:59.316Z] 3381.33 IOPS, 211.33 MiB/s 00:42:01.993 Latency(us) 00:42:01.993 [2024-11-20T14:02:59.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:01.993 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0xbd0b 00:42:01.993 nvme0n1 : 6.00 181.42 11.34 0.00 0.00 694534.28 8426.06 842855.38 00:42:01.993 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0xbd0b length 0xbd0b 00:42:01.993 nvme0n1 : 5.96 136.90 8.56 0.00 0.00 898657.73 22843.98 1517938.59 00:42:01.993 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0xa000 00:42:01.993 nvme1n1 : 5.99 130.83 8.18 0.00 0.00 940184.14 17975.59 1062557.01 00:42:01.993 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0xa000 length 0xa000 00:42:01.993 nvme1n1 : 5.99 122.97 7.69 0.00 0.00 995735.40 15354.15 1302231.53 00:42:01.993 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0x8000 00:42:01.993 nvme2n1 : 6.00 136.01 8.50 0.00 0.00 881433.12 7614.66 1390112.18 00:42:01.993 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x8000 length 0x8000 00:42:01.993 nvme2n1 : 5.99 138.98 8.69 0.00 0.00 842657.93 16227.96 1382123.03 00:42:01.993 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0x8000 00:42:01.993 nvme2n2 : 6.00 125.31 7.83 0.00 0.00 935023.84 12420.63 1797558.86 00:42:01.993 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x8000 length 0x8000 00:42:01.993 nvme2n2 : 5.98 157.78 9.86 0.00 0.00 737827.72 16477.62 974676.36 00:42:01.993 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0x8000 00:42:01.993 nvme2n3 : 6.00 150.79 9.42 0.00 0.00 757080.75 6522.39 1166415.97 00:42:01.993 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x8000 length 0x8000 00:42:01.993 nvme2n3 : 5.97 136.61 8.54 0.00 0.00 830706.59 13544.11 1174405.12 00:42:01.993 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x0 length 0x2000 00:42:01.993 nvme3n1 : 6.00 139.94 8.75 0.00 0.00 794374.86 4805.97 1070546.16 00:42:01.993 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:01.993 Verification LBA range: start 0x2000 length 0x2000 00:42:01.993 nvme3n1 : 5.98 147.19 9.20 0.00 0.00 751663.43 4462.69 1046578.71 00:42:01.993 [2024-11-20T14:02:59.316Z] =================================================================================================================== 00:42:01.993 [2024-11-20T14:02:59.316Z] Total : 1704.71 106.54 0.00 0.00 829732.37 4462.69 1797558.86 00:42:02.928 00:42:02.928 real 0m8.461s 00:42:02.928 user 0m15.447s 00:42:02.928 sys 0m0.536s 00:42:02.928 14:03:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.928 14:03:00 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:42:02.928 ************************************ 00:42:02.928 END TEST bdev_verify_big_io 00:42:02.928 ************************************ 00:42:03.187 14:03:00 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:03.187 14:03:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:42:03.187 14:03:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:03.187 14:03:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:03.187 ************************************ 00:42:03.187 START TEST bdev_write_zeroes 00:42:03.187 ************************************ 00:42:03.187 14:03:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:03.187 [2024-11-20 14:03:00.373263] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:03.187 [2024-11-20 14:03:00.373407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75336 ] 00:42:03.447 [2024-11-20 14:03:00.550013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.447 [2024-11-20 14:03:00.668513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.015 Running I/O for 1 seconds... 
00:42:04.952 79379.00 IOPS, 310.07 MiB/s 00:42:04.952 Latency(us) 00:42:04.952 [2024-11-20T14:03:02.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:04.952 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.952 nvme0n1 : 1.04 17376.08 67.88 0.00 0.00 7330.75 1435.55 27213.04 00:42:04.952 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.952 nvme1n1 : 1.03 12151.98 47.47 0.00 0.00 10452.38 5118.05 31956.60 00:42:04.952 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.952 nvme2n1 : 1.03 12008.31 46.91 0.00 0.00 10571.17 6085.49 32705.58 00:42:04.952 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.953 nvme2n2 : 1.04 11991.10 46.84 0.00 0.00 10579.88 6366.35 33204.91 00:42:04.953 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.953 nvme2n3 : 1.04 11974.00 46.77 0.00 0.00 10588.37 6303.94 33454.57 00:42:04.953 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:04.953 nvme3n1 : 1.04 11956.45 46.70 0.00 0.00 10596.38 6241.52 33953.89 00:42:04.953 [2024-11-20T14:03:02.276Z] =================================================================================================================== 00:42:04.953 [2024-11-20T14:03:02.276Z] Total : 77457.91 302.57 0.00 0.00 9831.21 1435.55 33953.89 00:42:06.331 00:42:06.331 real 0m3.163s 00:42:06.331 user 0m2.293s 00:42:06.331 sys 0m0.687s 00:42:06.331 14:03:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.331 14:03:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:42:06.331 ************************************ 00:42:06.331 END TEST bdev_write_zeroes 00:42:06.331 ************************************ 00:42:06.331 14:03:03 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:06.331 14:03:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:42:06.331 14:03:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.331 14:03:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:06.331 ************************************ 00:42:06.331 START TEST bdev_json_nonenclosed 00:42:06.331 ************************************ 00:42:06.331 14:03:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:06.331 [2024-11-20 14:03:03.584551] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:42:06.331 [2024-11-20 14:03:03.584684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75413 ] 00:42:06.590 [2024-11-20 14:03:03.754896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.590 [2024-11-20 14:03:03.869183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:06.590 [2024-11-20 14:03:03.869289] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:42:06.590 [2024-11-20 14:03:03.869311] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:06.590 [2024-11-20 14:03:03.869324] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:06.848 00:42:06.848 real 0m0.630s 00:42:06.848 user 0m0.378s 00:42:06.848 sys 0m0.148s 00:42:06.848 14:03:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.848 14:03:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:06.848 ************************************ 00:42:06.848 END TEST bdev_json_nonenclosed 00:42:06.848 ************************************ 00:42:07.107 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:07.107 14:03:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:42:07.107 14:03:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:07.107 14:03:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:07.107 ************************************ 00:42:07.107 START TEST bdev_json_nonarray 00:42:07.107 ************************************ 00:42:07.107 14:03:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:07.107 [2024-11-20 14:03:04.320991] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:07.107 [2024-11-20 14:03:04.321187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75438 ] 00:42:07.366 [2024-11-20 14:03:04.519102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.366 [2024-11-20 14:03:04.638602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.366 [2024-11-20 14:03:04.638712] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
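Both negative JSON tests hinge on the shape of the top-level config, which SPDK expects to be an object whose "subsystems" member is an array. The nonenclosed.json and nonarray.json files themselves are not shown in the log; a hedged sketch of inputs that would reproduce the two errors above (file paths are illustrative):

    # Would trigger "not enclosed in {}": a bare array at top level.
    echo '[ { "subsystem": "bdev", "config": [] } ]' > /tmp/nonenclosed.json
    # Would trigger "'subsystems' should be an array": an object instead of an array.
    echo '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > /tmp/nonarray.json
    # A well-formed config keeps "subsystems" as an array of subsystem objects.
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/ok.json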
00:42:07.366 [2024-11-20 14:03:04.638735] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:42:07.366 [2024-11-20 14:03:04.638747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:42:07.625
00:42:07.625 real 0m0.717s
00:42:07.625 user 0m0.431s
00:42:07.625 sys 0m0.181s
00:42:07.625 14:03:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:07.625 14:03:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:42:07.625 ************************************
00:42:07.625 END TEST bdev_json_nonarray
00:42:07.625 ************************************
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:42:07.883 14:03:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:42:08.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:42:13.724 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:42:13.724 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:42:13.724 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:42:13.724 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:42:13.724
00:42:13.724 real 1m2.528s
00:42:13.724 user 1m38.964s
00:42:13.724 sys 0m33.592s
00:42:13.724 14:03:10 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:13.724 14:03:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:42:13.724 ************************************
00:42:13.724 END TEST blockdev_xnvme
00:42:13.724 ************************************
00:42:13.724 14:03:10 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:42:13.724 14:03:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:13.724 14:03:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:13.724 14:03:10 -- common/autotest_common.sh@10 -- # set +x
00:42:13.724 ************************************
00:42:13.724 START TEST ublk
00:42:13.724 ************************************
00:42:13.724 14:03:10 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:42:13.724 * Looking for test storage...
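A note on the two bdevperf failures just logged: nonenclosed.json and nonarray.json are deliberately malformed, and the json_config errors they trigger pin down the loadable shape -- a single {}-enclosed object whose "subsystems" key is an array. A minimal well-formed skeleton, reconstructed from those two error messages (the file path and the empty subsystems list are illustrative assumptions, not something this run created):

    # Hypothetical sketch; /tmp/minimal.json is not a file this job wrote.
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": []
    }
    EOF
    # Fed to bdevperf the same way the negative tests above feed theirs:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/minimal.json -q 128 -o 4096 -w write_zeroes -t 1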
00:42:13.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:42:13.724 14:03:10 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1693 -- # lcov --version
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:42:13.725 14:03:10 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:42:13.725 14:03:10 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:42:13.725 14:03:10 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:42:13.725 14:03:10 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:42:13.725 14:03:10 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:42:13.725 14:03:10 ublk -- scripts/common.sh@344 -- # case "$op" in
00:42:13.725 14:03:10 ublk -- scripts/common.sh@345 -- # : 1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:42:13.725 14:03:10 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:13.725 14:03:10 ublk -- scripts/common.sh@365 -- # decimal 1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@353 -- # local d=1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:42:13.725 14:03:10 ublk -- scripts/common.sh@355 -- # echo 1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:42:13.725 14:03:10 ublk -- scripts/common.sh@366 -- # decimal 2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@353 -- # local d=2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:42:13.725 14:03:10 ublk -- scripts/common.sh@355 -- # echo 2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:42:13.725 14:03:10 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:42:13.725 14:03:10 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:42:13.725 14:03:10 ublk -- scripts/common.sh@368 -- # return 0
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:42:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:13.725 --rc genhtml_branch_coverage=1
00:42:13.725 --rc genhtml_function_coverage=1
00:42:13.725 --rc genhtml_legend=1
00:42:13.725 --rc geninfo_all_blocks=1
00:42:13.725 --rc geninfo_unexecuted_blocks=1
00:42:13.725
00:42:13.725 '
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:42:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:13.725 --rc genhtml_branch_coverage=1
00:42:13.725 --rc genhtml_function_coverage=1
00:42:13.725 --rc genhtml_legend=1
00:42:13.725 --rc geninfo_all_blocks=1
00:42:13.725 --rc geninfo_unexecuted_blocks=1
00:42:13.725
00:42:13.725 '
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:42:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:13.725 --rc genhtml_branch_coverage=1
00:42:13.725 --rc genhtml_function_coverage=1
00:42:13.725 --rc genhtml_legend=1
00:42:13.725 --rc geninfo_all_blocks=1
00:42:13.725 --rc geninfo_unexecuted_blocks=1
00:42:13.725
00:42:13.725 '
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:42:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:13.725 --rc genhtml_branch_coverage=1
00:42:13.725 --rc genhtml_function_coverage=1
00:42:13.725 --rc genhtml_legend=1
00:42:13.725 --rc geninfo_all_blocks=1
00:42:13.725 --rc geninfo_unexecuted_blocks=1
00:42:13.725
00:42:13.725 '
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:42:13.725 14:03:10 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:42:13.725 14:03:10 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:42:13.725 14:03:10 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:42:13.725 14:03:10 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:42:13.725 14:03:10 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:42:13.725 14:03:10 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:42:13.725 14:03:10 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:42:13.725 14:03:10 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:42:13.725 14:03:10 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:13.725 14:03:10 ublk -- common/autotest_common.sh@10 -- # set +x
00:42:13.725 ************************************
00:42:13.725 START TEST test_save_ublk_config
00:42:13.725 ************************************
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75744
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75744
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75744 ']'
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:13.725 14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
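The cmp_versions walk traced above is scripts/common.sh deciding which lcov flags to export: lt 1.15 2 splits both version strings on the IFS=.-: separators, compares them field by field, and returns 0 because 1 < 2 in the first field, so the legacy --rc lcov_*/genhtml_* options get exported. Condensed into a standalone sketch (a paraphrase of the traced logic, not the verbatim source; missing fields default to 0 here as an assumption):

    # version_lt A B: succeed when A sorts strictly before B, field by field.
    version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc options"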
00:42:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
14:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:13.725 [2024-11-20 14:03:11.037910] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:42:13.725 [2024-11-20 14:03:11.038309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75744 ]
00:42:13.986 [2024-11-20 14:03:11.242530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:14.247 [2024-11-20 14:03:11.409174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:15.182 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:15.182 [2024-11-20 14:03:12.379504] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:42:15.182 [2024-11-20 14:03:12.380700] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:42:15.182 malloc0
00:42:15.182 [2024-11-20 14:03:12.459950] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:42:15.182 [2024-11-20 14:03:12.460045] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:42:15.182 [2024-11-20 14:03:12.460059] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:42:15.182 [2024-11-20 14:03:12.460067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:42:15.183 [2024-11-20 14:03:12.468628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:15.183 [2024-11-20 14:03:12.468660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:42:15.183 [2024-11-20 14:03:12.475519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:15.183 [2024-11-20 14:03:12.475634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:42:15.183 [2024-11-20 14:03:12.492520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:42:15.183 0
00:42:15.183 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:15.183 14:03:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:42:15.183 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:15.183 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:15.751 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:15.751 14:03:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:42:15.751 "subsystems": [
00:42:15.751 {
00:42:15.751 "subsystem": "fsdev",
"config": [
00:42:15.751 {
00:42:15.751 "method": "fsdev_set_opts",
00:42:15.751 "params": {
00:42:15.751 "fsdev_io_pool_size": 65535,
00:42:15.751 "fsdev_io_cache_size": 256
00:42:15.751 }
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "keyring",
00:42:15.751 "config": []
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "iobuf",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "iobuf_set_options",
00:42:15.751 "params": {
00:42:15.751 "small_pool_count": 8192,
00:42:15.751 "large_pool_count": 1024,
00:42:15.751 "small_bufsize": 8192,
00:42:15.751 "large_bufsize": 135168,
00:42:15.751 "enable_numa": false
00:42:15.751 }
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "sock",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "sock_set_default_impl",
00:42:15.751 "params": {
00:42:15.751 "impl_name": "posix"
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "sock_impl_set_options",
00:42:15.751 "params": {
00:42:15.751 "impl_name": "ssl",
00:42:15.751 "recv_buf_size": 4096,
00:42:15.751 "send_buf_size": 4096,
00:42:15.751 "enable_recv_pipe": true,
00:42:15.751 "enable_quickack": false,
00:42:15.751 "enable_placement_id": 0,
00:42:15.751 "enable_zerocopy_send_server": true,
00:42:15.751 "enable_zerocopy_send_client": false,
00:42:15.751 "zerocopy_threshold": 0,
00:42:15.751 "tls_version": 0,
00:42:15.751 "enable_ktls": false
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "sock_impl_set_options",
00:42:15.751 "params": {
00:42:15.751 "impl_name": "posix",
00:42:15.751 "recv_buf_size": 2097152,
00:42:15.751 "send_buf_size": 2097152,
00:42:15.751 "enable_recv_pipe": true,
00:42:15.751 "enable_quickack": false,
00:42:15.751 "enable_placement_id": 0,
00:42:15.751 "enable_zerocopy_send_server": true,
00:42:15.751 "enable_zerocopy_send_client": false,
00:42:15.751 "zerocopy_threshold": 0,
00:42:15.751 "tls_version": 0,
00:42:15.751 "enable_ktls": false
00:42:15.751 }
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "vmd",
00:42:15.751 "config": []
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "accel",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "accel_set_options",
00:42:15.751 "params": {
00:42:15.751 "small_cache_size": 128,
00:42:15.751 "large_cache_size": 16,
00:42:15.751 "task_count": 2048,
00:42:15.751 "sequence_count": 2048,
00:42:15.751 "buf_count": 2048
00:42:15.751 }
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "bdev",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "bdev_set_options",
00:42:15.751 "params": {
00:42:15.751 "bdev_io_pool_size": 65535,
00:42:15.751 "bdev_io_cache_size": 256,
00:42:15.751 "bdev_auto_examine": true,
00:42:15.751 "iobuf_small_cache_size": 128,
00:42:15.751 "iobuf_large_cache_size": 16
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_raid_set_options",
00:42:15.751 "params": {
00:42:15.751 "process_window_size_kb": 1024,
00:42:15.751 "process_max_bandwidth_mb_sec": 0
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_iscsi_set_options",
00:42:15.751 "params": {
00:42:15.751 "timeout_sec": 30
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_nvme_set_options",
00:42:15.751 "params": {
00:42:15.751 "action_on_timeout": "none",
00:42:15.751 "timeout_us": 0,
00:42:15.751 "timeout_admin_us": 0,
"keep_alive_timeout_ms": 10000,
00:42:15.751 "arbitration_burst": 0,
00:42:15.751 "low_priority_weight": 0,
00:42:15.751 "medium_priority_weight": 0,
00:42:15.751 "high_priority_weight": 0,
00:42:15.751 "nvme_adminq_poll_period_us": 10000,
00:42:15.751 "nvme_ioq_poll_period_us": 0,
00:42:15.751 "io_queue_requests": 0,
00:42:15.751 "delay_cmd_submit": true,
00:42:15.751 "transport_retry_count": 4,
00:42:15.751 "bdev_retry_count": 3,
00:42:15.751 "transport_ack_timeout": 0,
00:42:15.751 "ctrlr_loss_timeout_sec": 0,
00:42:15.751 "reconnect_delay_sec": 0,
00:42:15.751 "fast_io_fail_timeout_sec": 0,
00:42:15.751 "disable_auto_failback": false,
00:42:15.751 "generate_uuids": false,
00:42:15.751 "transport_tos": 0,
00:42:15.751 "nvme_error_stat": false,
00:42:15.751 "rdma_srq_size": 0,
00:42:15.751 "io_path_stat": false,
00:42:15.751 "allow_accel_sequence": false,
00:42:15.751 "rdma_max_cq_size": 0,
00:42:15.751 "rdma_cm_event_timeout_ms": 0,
00:42:15.751 "dhchap_digests": [
00:42:15.751 "sha256",
00:42:15.751 "sha384",
00:42:15.751 "sha512"
00:42:15.751 ],
00:42:15.751 "dhchap_dhgroups": [
00:42:15.751 "null",
00:42:15.751 "ffdhe2048",
00:42:15.751 "ffdhe3072",
00:42:15.751 "ffdhe4096",
00:42:15.751 "ffdhe6144",
00:42:15.751 "ffdhe8192"
00:42:15.751 ]
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_nvme_set_hotplug",
00:42:15.751 "params": {
00:42:15.751 "period_us": 100000,
00:42:15.751 "enable": false
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_malloc_create",
00:42:15.751 "params": {
00:42:15.751 "name": "malloc0",
00:42:15.751 "num_blocks": 8192,
00:42:15.751 "block_size": 4096,
00:42:15.751 "physical_block_size": 4096,
00:42:15.751 "uuid": "bf8735b0-c247-4ad5-a21d-964a1ea23b10",
00:42:15.751 "optimal_io_boundary": 0,
00:42:15.751 "md_size": 0,
00:42:15.751 "dif_type": 0,
00:42:15.751 "dif_is_head_of_md": false,
00:42:15.751 "dif_pi_format": 0
00:42:15.751 }
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "method": "bdev_wait_for_examine"
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "scsi",
00:42:15.751 "config": null
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "scheduler",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "framework_set_scheduler",
00:42:15.751 "params": {
00:42:15.751 "name": "static"
00:42:15.751 }
00:42:15.751 }
00:42:15.751 ]
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "vhost_scsi",
00:42:15.751 "config": []
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "vhost_blk",
00:42:15.751 "config": []
00:42:15.751 },
00:42:15.751 {
00:42:15.751 "subsystem": "ublk",
00:42:15.751 "config": [
00:42:15.751 {
00:42:15.751 "method": "ublk_create_target",
00:42:15.751 "params": {
00:42:15.751 "cpumask": "1"
00:42:15.752 }
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "method": "ublk_start_disk",
00:42:15.752 "params": {
00:42:15.752 "bdev_name": "malloc0",
00:42:15.752 "ublk_id": 0,
00:42:15.752 "num_queues": 1,
00:42:15.752 "queue_depth": 128
00:42:15.752 }
00:42:15.752 }
00:42:15.752 ]
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "subsystem": "nbd",
00:42:15.752 "config": []
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "subsystem": "nvmf",
00:42:15.752 "config": [
00:42:15.752 {
00:42:15.752 "method": "nvmf_set_config",
00:42:15.752 "params": {
00:42:15.752 "discovery_filter": "match_any",
00:42:15.752 "admin_cmd_passthru": {
00:42:15.752 "identify_ctrlr": false
00:42:15.752 },
00:42:15.752 "dhchap_digests": [
00:42:15.752 "sha256",
"sha384",
00:42:15.752 "sha512"
00:42:15.752 ],
00:42:15.752 "dhchap_dhgroups": [
00:42:15.752 "null",
00:42:15.752 "ffdhe2048",
00:42:15.752 "ffdhe3072",
00:42:15.752 "ffdhe4096",
00:42:15.752 "ffdhe6144",
00:42:15.752 "ffdhe8192"
00:42:15.752 ]
00:42:15.752 }
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "method": "nvmf_set_max_subsystems",
00:42:15.752 "params": {
00:42:15.752 "max_subsystems": 1024
00:42:15.752 }
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "method": "nvmf_set_crdt",
00:42:15.752 "params": {
00:42:15.752 "crdt1": 0,
00:42:15.752 "crdt2": 0,
00:42:15.752 "crdt3": 0
00:42:15.752 }
00:42:15.752 }
00:42:15.752 ]
00:42:15.752 },
00:42:15.752 {
00:42:15.752 "subsystem": "iscsi",
00:42:15.752 "config": [
00:42:15.752 {
00:42:15.752 "method": "iscsi_set_options",
00:42:15.752 "params": {
00:42:15.752 "node_base": "iqn.2016-06.io.spdk",
00:42:15.752 "max_sessions": 128,
00:42:15.752 "max_connections_per_session": 2,
00:42:15.752 "max_queue_depth": 64,
00:42:15.752 "default_time2wait": 2,
00:42:15.752 "default_time2retain": 20,
00:42:15.752 "first_burst_length": 8192,
00:42:15.752 "immediate_data": true,
00:42:15.752 "allow_duplicated_isid": false,
00:42:15.752 "error_recovery_level": 0,
00:42:15.752 "nop_timeout": 60,
00:42:15.752 "nop_in_interval": 30,
00:42:15.752 "disable_chap": false,
00:42:15.752 "require_chap": false,
00:42:15.752 "mutual_chap": false,
00:42:15.752 "chap_group": 0,
00:42:15.752 "max_large_datain_per_connection": 64,
00:42:15.752 "max_r2t_per_connection": 4,
00:42:15.752 "pdu_pool_size": 36864,
00:42:15.752 "immediate_data_pool_size": 16384,
00:42:15.752 "data_out_pool_size": 2048
00:42:15.752 }
00:42:15.752 }
00:42:15.752 ]
00:42:15.752 }
00:42:15.752 ]
00:42:15.752 }'
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75744
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75744 ']'
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75744
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75744
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 75744
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75744'
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75744
00:42:15.752 14:03:12 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75744
00:42:17.129 [2024-11-20 14:03:14.326251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:42:17.129 [2024-11-20 14:03:14.357586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:42:17.129 [2024-11-20 14:03:14.357720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:42:17.129 [2024-11-20 14:03:14.365541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:42:17.129 [2024-11-20 14:03:14.365601] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:42:17.129 [2024-11-20 14:03:14.365620] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:42:17.129 [2024-11-20 14:03:14.365648] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:42:17.129 [2024-11-20 14:03:14.365800] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:42:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75814
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75814
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75814 ']'
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:42:19.030 14:03:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:42:19.030 "subsystems": [
00:42:19.030 {
00:42:19.030 "subsystem": "fsdev",
00:42:19.030 "config": [
00:42:19.030 {
00:42:19.030 "method": "fsdev_set_opts",
00:42:19.030 "params": {
00:42:19.030 "fsdev_io_pool_size": 65535,
00:42:19.030 "fsdev_io_cache_size": 256
00:42:19.030 }
00:42:19.030 }
00:42:19.030 ]
00:42:19.030 },
00:42:19.030 {
00:42:19.030 "subsystem": "keyring",
00:42:19.030 "config": []
00:42:19.030 },
00:42:19.030 {
00:42:19.030 "subsystem": "iobuf",
00:42:19.030 "config": [
00:42:19.030 {
00:42:19.030 "method": "iobuf_set_options",
00:42:19.030 "params": {
00:42:19.030 "small_pool_count": 8192,
00:42:19.030 "large_pool_count": 1024,
00:42:19.031 "small_bufsize": 8192,
00:42:19.031 "large_bufsize": 135168,
00:42:19.031 "enable_numa": false
00:42:19.031 }
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "sock",
00:42:19.031 "config": [
00:42:19.031 {
00:42:19.031 "method": "sock_set_default_impl",
00:42:19.031 "params": {
00:42:19.031 "impl_name": "posix"
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "sock_impl_set_options",
00:42:19.031 "params": {
00:42:19.031 "impl_name": "ssl",
00:42:19.031 "recv_buf_size": 4096,
00:42:19.031 "send_buf_size": 4096,
00:42:19.031 "enable_recv_pipe": true,
00:42:19.031 "enable_quickack": false,
00:42:19.031 "enable_placement_id": 0,
00:42:19.031 "enable_zerocopy_send_server": true,
00:42:19.031 "enable_zerocopy_send_client": false,
00:42:19.031 "zerocopy_threshold": 0,
00:42:19.031 "tls_version": 0,
00:42:19.031 "enable_ktls": false
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "sock_impl_set_options",
00:42:19.031 "params": {
00:42:19.031 "impl_name": "posix",
00:42:19.031 "recv_buf_size": 2097152,
00:42:19.031 "send_buf_size": 2097152,
00:42:19.031 "enable_recv_pipe": true,
00:42:19.031 "enable_quickack": false,
00:42:19.031 "enable_placement_id": 0,
00:42:19.031 "enable_zerocopy_send_server": true,
00:42:19.031 "enable_zerocopy_send_client": false,
00:42:19.031 "zerocopy_threshold": 0,
00:42:19.031 "tls_version": 0,
00:42:19.031 "enable_ktls": false
00:42:19.031 }
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "vmd",
00:42:19.031 "config": []
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "accel",
00:42:19.031 "config": [
00:42:19.031 {
00:42:19.031 "method": "accel_set_options",
00:42:19.031 "params": {
00:42:19.031 "small_cache_size": 128,
00:42:19.031 "large_cache_size": 16,
00:42:19.031 "task_count": 2048,
00:42:19.031 "sequence_count": 2048,
00:42:19.031 "buf_count": 2048
00:42:19.031 }
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "bdev",
00:42:19.031 "config": [
00:42:19.031 {
00:42:19.031 "method": "bdev_set_options",
00:42:19.031 "params": {
00:42:19.031 "bdev_io_pool_size": 65535,
00:42:19.031 "bdev_io_cache_size": 256,
00:42:19.031 "bdev_auto_examine": true,
00:42:19.031 "iobuf_small_cache_size": 128,
00:42:19.031 "iobuf_large_cache_size": 16
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "bdev_raid_set_options",
00:42:19.031 "params": {
00:42:19.031 "process_window_size_kb": 1024,
00:42:19.031 "process_max_bandwidth_mb_sec": 0
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "bdev_iscsi_set_options",
00:42:19.031 "params": {
00:42:19.031 "timeout_sec": 30
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "bdev_nvme_set_options",
00:42:19.031 "params": {
00:42:19.031 "action_on_timeout": "none",
00:42:19.031 "timeout_us": 0,
00:42:19.031 "timeout_admin_us": 0,
00:42:19.031 "keep_alive_timeout_ms": 10000,
00:42:19.031 "arbitration_burst": 0,
00:42:19.031 "low_priority_weight": 0,
00:42:19.031 "medium_priority_weight": 0,
00:42:19.031 "high_priority_weight": 0,
00:42:19.031 "nvme_adminq_poll_period_us": 10000,
00:42:19.031 "nvme_ioq_poll_period_us": 0,
00:42:19.031 "io_queue_requests": 0,
00:42:19.031 "delay_cmd_submit": true,
00:42:19.031 "transport_retry_count": 4,
00:42:19.031 "bdev_retry_count": 3,
00:42:19.031 "transport_ack_timeout": 0,
00:42:19.031 "ctrlr_loss_timeout_sec": 0,
00:42:19.031 "reconnect_delay_sec": 0,
00:42:19.031 "fast_io_fail_timeout_sec": 0,
00:42:19.031 "disable_auto_failback": false,
00:42:19.031 "generate_uuids": false,
00:42:19.031 "transport_tos": 0,
00:42:19.031 "nvme_error_stat": false,
00:42:19.031 "rdma_srq_size": 0,
00:42:19.031 "io_path_stat": false,
00:42:19.031 "allow_accel_sequence": false,
00:42:19.031 "rdma_max_cq_size": 0,
00:42:19.031 "rdma_cm_event_timeout_ms": 0,
00:42:19.031 "dhchap_digests": [
00:42:19.031 "sha256",
00:42:19.031 "sha384",
00:42:19.031 "sha512"
00:42:19.031 ],
00:42:19.031 "dhchap_dhgroups": [
00:42:19.031 "null",
00:42:19.031 "ffdhe2048",
00:42:19.031 "ffdhe3072",
00:42:19.031 "ffdhe4096",
00:42:19.031 "ffdhe6144",
00:42:19.031 "ffdhe8192"
00:42:19.031 ]
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "bdev_nvme_set_hotplug",
00:42:19.031 "params": {
00:42:19.031 "period_us": 100000,
00:42:19.031 "enable": false
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "bdev_malloc_create",
00:42:19.031 "params": {
00:42:19.031 "name": "malloc0",
00:42:19.031 "num_blocks": 8192,
00:42:19.031 "block_size": 4096,
00:42:19.031 "physical_block_size": 4096,
00:42:19.031 "uuid": "bf8735b0-c247-4ad5-a21d-964a1ea23b10",
00:42:19.031 "optimal_io_boundary": 0,
00:42:19.031 "md_size": 0,
00:42:19.031 "dif_type": 0,
00:42:19.031 "dif_is_head_of_md": false,
00:42:19.031 "dif_pi_format": 0
00:42:19.031 }
00:42:19.031 },
{
00:42:19.031 "method": "bdev_wait_for_examine"
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "scsi",
00:42:19.031 "config": null
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "scheduler",
00:42:19.031 "config": [
00:42:19.031 {
00:42:19.031 "method": "framework_set_scheduler",
00:42:19.031 "params": {
00:42:19.031 "name": "static"
00:42:19.031 }
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "vhost_scsi",
00:42:19.031 "config": []
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "vhost_blk",
00:42:19.031 "config": []
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "ublk",
00:42:19.031 "config": [
00:42:19.031 {
00:42:19.031 "method": "ublk_create_target",
00:42:19.031 "params": {
00:42:19.031 "cpumask": "1"
00:42:19.031 }
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "method": "ublk_start_disk",
00:42:19.031 "params": {
00:42:19.031 "bdev_name": "malloc0",
00:42:19.031 "ublk_id": 0,
00:42:19.031 "num_queues": 1,
00:42:19.031 "queue_depth": 128
00:42:19.031 }
00:42:19.031 }
00:42:19.031 ]
00:42:19.031 },
00:42:19.031 {
00:42:19.031 "subsystem": "nbd",
00:42:19.031 "config": []
00:42:19.031 },
00:42:19.032 {
00:42:19.032 "subsystem": "nvmf",
00:42:19.032 "config": [
00:42:19.032 {
00:42:19.032 "method": "nvmf_set_config",
00:42:19.032 "params": {
00:42:19.032 "discovery_filter": "match_any",
00:42:19.032 "admin_cmd_passthru": {
00:42:19.032 "identify_ctrlr": false
00:42:19.032 },
00:42:19.032 "dhchap_digests": [
00:42:19.032 "sha256",
00:42:19.032 "sha384",
00:42:19.032 "sha512"
00:42:19.032 ],
00:42:19.032 "dhchap_dhgroups": [
00:42:19.032 "null",
00:42:19.032 "ffdhe2048",
00:42:19.032 "ffdhe3072",
00:42:19.032 "ffdhe4096",
00:42:19.032 "ffdhe6144",
00:42:19.032 "ffdhe8192"
00:42:19.032 ]
00:42:19.032 }
00:42:19.032 },
00:42:19.032 {
00:42:19.032 "method": "nvmf_set_max_subsystems",
00:42:19.032 "params": {
00:42:19.032 "max_subsystems": 1024
00:42:19.032 }
00:42:19.032 },
00:42:19.032 {
00:42:19.032 "method": "nvmf_set_crdt",
00:42:19.032 "params": {
00:42:19.032 "crdt1": 0,
00:42:19.032 "crdt2": 0,
00:42:19.032 "crdt3": 0
00:42:19.032 }
00:42:19.032 }
00:42:19.032 ]
00:42:19.032 },
00:42:19.032 {
00:42:19.032 "subsystem": "iscsi",
00:42:19.032 "config": [
00:42:19.032 {
00:42:19.032 "method": "iscsi_set_options",
00:42:19.032 "params": {
00:42:19.032 "node_base": "iqn.2016-06.io.spdk",
00:42:19.032 "max_sessions": 128,
00:42:19.032 "max_connections_per_session": 2,
00:42:19.032 "max_queue_depth": 64,
00:42:19.032 "default_time2wait": 2,
00:42:19.032 "default_time2retain": 20,
00:42:19.032 "first_burst_length": 8192,
00:42:19.032 "immediate_data": true,
00:42:19.032 "allow_duplicated_isid": false,
00:42:19.032 "error_recovery_level": 0,
00:42:19.032 "nop_timeout": 60,
00:42:19.032 "nop_in_interval": 30,
00:42:19.032 "disable_chap": false,
00:42:19.032 "require_chap": false,
00:42:19.032 "mutual_chap": false,
00:42:19.032 "chap_group": 0,
00:42:19.032 "max_large_datain_per_connection": 64,
00:42:19.032 "max_r2t_per_connection": 4,
00:42:19.032 "pdu_pool_size": 36864,
00:42:19.032 "immediate_data_pool_size": 16384,
00:42:19.032 "data_out_pool_size": 2048
00:42:19.032 }
00:42:19.032 }
00:42:19.032 ]
00:42:19.032 }
00:42:19.032 ]
00:42:19.032 }'
00:42:19.290 [2024-11-20 14:03:16.403141] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:42:19.290 [2024-11-20 14:03:16.403320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75814 ]
00:42:19.290 [2024-11-20 14:03:16.587838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:19.548 [2024-11-20 14:03:16.699993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:20.483 [2024-11-20 14:03:17.763511] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:42:20.483 [2024-11-20 14:03:17.764811] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:42:20.483 [2024-11-20 14:03:17.771636] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:42:20.483 [2024-11-20 14:03:17.771716] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:42:20.483 [2024-11-20 14:03:17.771729] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:42:20.483 [2024-11-20 14:03:17.771738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:42:20.483 [2024-11-20 14:03:17.778500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:20.483 [2024-11-20 14:03:17.778524] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:42:20.483 [2024-11-20 14:03:17.786512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:20.483 [2024-11-20 14:03:17.786605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:42:20.742 [2024-11-20 14:03:17.810516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75814
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75814 ']'
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75814
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75814
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 75814
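That kill tears down the restored target and closes the round trip this test exists to exercise: the first spdk_tgt built the ublk device and save_config serialized the whole subsystem tree; a second spdk_tgt was then launched with -c /dev/fd/63, i.e. the same JSON echoed back through process substitution, and ublk_get_disks confirmed /dev/ublkb0 came back. Reduced to a sketch -- rpc.py is the stock SPDK RPC client standing in for the in-script rpc_cmd wrapper (an assumption), and the malloc size is inferred from the dump's 8192 blocks of 4096 bytes:

    # Hedged sketch of the save/restore cycle, not the test script itself.
    ./build/bin/spdk_tgt -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB, per the dump
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    config=$(./scripts/rpc.py save_config)
    kill %1; wait
    # Restore: the <(...) substitution surfaces as /dev/fd/63, as in the
    # -c flag logged above.
    ./build/bin/spdk_tgt -L ublk -c <(printf '%s\n' "$config")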
14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75814'
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75814
00:42:20.742 14:03:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75814
00:42:22.649 [2024-11-20 14:03:19.509768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:42:22.649 [2024-11-20 14:03:19.557535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:42:22.649 [2024-11-20 14:03:19.557688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:42:22.649 [2024-11-20 14:03:19.566518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:42:22.649 [2024-11-20 14:03:19.566578] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:42:22.649 [2024-11-20 14:03:19.566588] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:42:22.649 [2024-11-20 14:03:19.566619] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:42:22.649 [2024-11-20 14:03:19.566771] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:42:24.554 14:03:21 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:42:24.554
00:42:24.554 real 0m10.574s
00:42:24.554 user 0m8.306s
00:42:24.554 sys 0m3.168s
00:42:24.554 14:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:24.554 14:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:42:24.554 ************************************
00:42:24.554 END TEST test_save_ublk_config
00:42:24.554 ************************************
00:42:24.554 14:03:21 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75902
00:42:24.554 14:03:21 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:42:24.554 14:03:21 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:42:24.554 14:03:21 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75902
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@835 -- # '[' -z 75902 ']'
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:24.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:24.554 14:03:21 ublk -- common/autotest_common.sh@10 -- # set +x
00:42:24.554 [2024-11-20 14:03:21.669121] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization...
00:42:24.554 [2024-11-20 14:03:21.669303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75902 ]
00:42:24.554 [2024-11-20 14:03:21.864299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:42:24.813 [2024-11-20 14:03:21.985866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:24.814 [2024-11-20 14:03:21.985881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:25.751 14:03:22 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:25.751 14:03:22 ublk -- common/autotest_common.sh@868 -- # return 0
00:42:25.751 14:03:22 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:42:25.751 14:03:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:25.751 14:03:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:25.751 14:03:22 ublk -- common/autotest_common.sh@10 -- # set +x
00:42:25.751 ************************************
00:42:25.751 START TEST test_create_ublk
00:42:25.751 ************************************
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:42:25.751 14:03:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:25.751 [2024-11-20 14:03:22.909502] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:42:25.751 [2024-11-20 14:03:22.912287] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:25.751 14:03:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:42:25.751 14:03:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:25.751 14:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:26.010 [2024-11-20 14:03:23.234673] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:42:26.010 [2024-11-20 14:03:23.235159] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:42:26.010 [2024-11-20 14:03:23.235181] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:42:26.010 [2024-11-20 14:03:23.235191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:42:26.010 [2024-11-20 14:03:23.242848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:26.010 [2024-11-20 14:03:23.242892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
[2024-11-20 14:03:23.250547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:26.010 [2024-11-20 14:03:23.251195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:42:26.010 [2024-11-20 14:03:23.280540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:26.010 14:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:26.010 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:42:26.010 {
00:42:26.010 "ublk_device": "/dev/ublkb0",
00:42:26.010 "id": 0,
00:42:26.010 "queue_depth": 512,
00:42:26.010 "num_queues": 4,
00:42:26.010 "bdev_name": "Malloc0"
00:42:26.010 }
00:42:26.010 ]'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
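Decoding the template just assembled, before the invocation below runs (fio semantics, summarized from the flags above):

    # --rw=write over --size=134217728 stamps all 128 MiB of /dev/ublkb0;
    # --do_verify=1 --verify=pattern --verify_pattern=0xcc writes 0xcc into
    #   every block and queues a read-back check of each one;
    # --time_based --runtime=10 hands the full 10 s budget to the write phase,
    #   which is why fio immediately warns below that the verification read
    #   phase will never start.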
00:42:26.270 14:03:23 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:42:26.529 fio: verification read phase will never start because write phase uses all of runtime
00:42:26.529 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:42:26.529 fio-3.35
00:42:26.529 Starting 1 process
00:42:36.511
00:42:36.511 fio_test: (groupid=0, jobs=1): err= 0: pid=75954: Wed Nov 20 14:03:33 2024
00:42:36.511 write: IOPS=16.0k, BW=62.3MiB/s (65.3MB/s)(623MiB/10001msec); 0 zone resets
00:42:36.511 clat (usec): min=39, max=4025, avg=61.81, stdev=100.16
00:42:36.511 lat (usec): min=39, max=4025, avg=62.27, stdev=100.17
00:42:36.511 clat percentiles (usec):
00:42:36.511 | 1.00th=[ 42], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56],
00:42:36.511 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:42:36.511 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 67],
00:42:36.511 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 2024], 99.95th=[ 2835],
00:42:36.511 | 99.99th=[ 3654]
00:42:36.511 bw ( KiB/s): min=61552, max=69248, per=100.00%, avg=63877.47, stdev=1497.71, samples=19
00:42:36.511 iops : min=15388, max=17312, avg=15969.37, stdev=374.43, samples=19
00:42:36.511 lat (usec) : 50=3.00%, 100=96.75%, 250=0.04%, 500=0.01%, 750=0.01%
00:42:36.511 lat (usec) : 1000=0.02%
00:42:36.511 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01%
00:42:36.511 cpu : usr=3.37%, sys=10.35%, ctx=159532, majf=0, minf=795
00:42:36.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:36.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:36.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:36.511 issued rwts: total=0,159529,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:36.511 latency : target=0, window=0, percentile=100.00%, depth=1
00:42:36.511
00:42:36.511 Run status group 0 (all jobs):
00:42:36.511 WRITE: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=623MiB (653MB), run=10001-10001msec
00:42:36.511
00:42:36.511 Disk stats (read/write):
00:42:36.511 ublkb0: ios=0/157847, merge=0/0, ticks=0/8613, in_queue=8613, util=99.12%
00:42:36.511 14:03:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:42:36.511 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:36.511 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:36.511 [2024-11-20 14:03:33.798617] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:42:36.770 [2024-11-20 14:03:33.832546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:42:36.770 [2024-11-20 14:03:33.833286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:42:36.770 [2024-11-20 14:03:33.839519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:42:36.770 [2024-11-20 14:03:33.839852] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:42:36.770 [2024-11-20 14:03:33.839870] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:36.770 14:03:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:36.770 [2024-11-20 14:03:33.850591] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:42:36.770 request:
00:42:36.770 {
00:42:36.770 "ublk_id": 0,
00:42:36.770 "method": "ublk_stop_disk",
00:42:36.770 "req_id": 1
00:42:36.770 }
00:42:36.770 Got JSON-RPC error response
00:42:36.770 response:
00:42:36.770 {
00:42:36.770 "code": -19,
00:42:36.770 "message": "No such device"
00:42:36.770 }
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:42:36.770 14:03:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:36.770 [2024-11-20 14:03:33.865593] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:42:36.770 [2024-11-20 14:03:33.873498] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:42:36.770 [2024-11-20 14:03:33.873544] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:36.770 14:03:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:36.770 14:03:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.473 14:03:34 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:42:37.473 14:03:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:42:37.473
00:42:37.473 real 0m11.829s
00:42:37.473 user 0m0.755s
00:42:37.473 sys 0m1.158s
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:37.473 14:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.473 ************************************
00:42:37.473 END TEST test_create_ublk
00:42:37.473 ************************************
00:42:37.473 14:03:34 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:42:37.473 14:03:34 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:37.473 14:03:34 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:37.473 14:03:34 ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.473 ************************************
00:42:37.473 START TEST test_create_multi_ublk
00:42:37.473 ************************************
00:42:37.473 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:42:37.473 14:03:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:42:37.473 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.473 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.732 [2024-11-20 14:03:34.796517] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:42:37.732 [2024-11-20 14:03:34.799235] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.732 14:03:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:37.991 [2024-11-20 14:03:35.095667] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:42:37.991 [2024-11-20 14:03:35.096171] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:42:37.991 [2024-11-20 14:03:35.096191] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:42:37.991 [2024-11-20 14:03:35.096207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:42:37.991 [2024-11-20 14:03:35.104751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:37.991 [2024-11-20 14:03:35.104779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:42:37.991 [2024-11-20 14:03:35.111509] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:37.991 [2024-11-20 14:03:35.112129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:42:37.991 [2024-11-20 14:03:35.122584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:37.991 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:38.250 [2024-11-20 14:03:35.432645] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:42:38.250 [2024-11-20 14:03:35.433092] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:42:38.250 [2024-11-20 14:03:35.433112] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:42:38.250 [2024-11-20 14:03:35.433121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:42:38.250 [2024-11-20 14:03:35.440534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:38.250 [2024-11-20 14:03:35.440556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:42:38.250 [2024-11-20 14:03:35.448518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:38.250 [2024-11-20 14:03:35.449090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:42:38.250 [2024-11-20 14:03:35.464537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:38.250 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:38.510 [2024-11-20 14:03:35.771658] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:42:38.510 [2024-11-20 14:03:35.772168] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:42:38.510 [2024-11-20 14:03:35.772188] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:42:38.510 [2024-11-20 14:03:35.772200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:42:38.510 [2024-11-20 14:03:35.779557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:42:38.510 [2024-11-20 14:03:35.779589] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:42:38.510 [2024-11-20 14:03:35.787513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:42:38.510 [2024-11-20 14:03:35.788167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:42:38.510 [2024-11-20 14:03:35.803532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:38.510 14:03:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:42:39.078 [2024-11-20 14:03:36.113678] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:42:39.078 [2024-11-20 14:03:36.114182] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:42:39.078 [2024-11-20 14:03:36.114203] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:42:39.078 [2024-11-20 14:03:36.114213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
[2024-11-20 14:03:36.121573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:42:39.078 [2024-11-20 14:03:36.121599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:42:39.078 [2024-11-20 14:03:36.129542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:42:39.078 [2024-11-20 14:03:36.130176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:42:39.078 [2024-11-20 14:03:36.135577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:42:39.078 { 00:42:39.078 "ublk_device": "/dev/ublkb0", 00:42:39.078 "id": 0, 00:42:39.078 "queue_depth": 512, 00:42:39.078 "num_queues": 4, 00:42:39.078 "bdev_name": "Malloc0" 00:42:39.078 }, 00:42:39.078 { 00:42:39.078 "ublk_device": "/dev/ublkb1", 00:42:39.078 "id": 1, 00:42:39.078 "queue_depth": 512, 00:42:39.078 "num_queues": 4, 00:42:39.078 "bdev_name": "Malloc1" 00:42:39.078 }, 00:42:39.078 { 00:42:39.078 "ublk_device": "/dev/ublkb2", 00:42:39.078 "id": 2, 00:42:39.078 "queue_depth": 512, 00:42:39.078 "num_queues": 4, 00:42:39.078 "bdev_name": "Malloc2" 00:42:39.078 }, 00:42:39.078 { 00:42:39.078 "ublk_device": "/dev/ublkb3", 00:42:39.078 "id": 3, 00:42:39.078 "queue_depth": 512, 00:42:39.078 "num_queues": 4, 00:42:39.078 "bdev_name": "Malloc3" 00:42:39.078 } 00:42:39.078 ]' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:42:39.078 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.338 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:42:39.597 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:42:39.856 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:42:39.856 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:42:39.856 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:42:39.856 14:03:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:39.856 [2024-11-20 14:03:37.080758] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:42:39.856 [2024-11-20 14:03:37.111247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:39.856 [2024-11-20 14:03:37.112973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:42:39.856 [2024-11-20 14:03:37.120550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:39.856 [2024-11-20 14:03:37.120916] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:42:39.856 [2024-11-20 14:03:37.120937] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.856 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:39.856 [2024-11-20 14:03:37.136627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:42:39.856 [2024-11-20 14:03:37.166241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:39.856 [2024-11-20 14:03:37.167722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:42:39.856 [2024-11-20 14:03:37.176525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:39.856 [2024-11-20 14:03:37.176861] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:42:39.856 [2024-11-20 14:03:37.176880] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:40.115 [2024-11-20 14:03:37.191658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:42:40.115 [2024-11-20 14:03:37.222311] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:40.115 [2024-11-20 14:03:37.223697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:42:40.115 [2024-11-20 14:03:37.231539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:40.115 [2024-11-20 14:03:37.231917] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:42:40.115 [2024-11-20 14:03:37.231938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:42:40.115 [2024-11-20 14:03:37.247640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:42:40.115 [2024-11-20 14:03:37.285260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:42:40.115 [2024-11-20 14:03:37.286122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:42:40.115 [2024-11-20 14:03:37.294542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:42:40.115 [2024-11-20 14:03:37.294910] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:42:40.115 [2024-11-20 14:03:37.294928] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.115 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:42:40.374 [2024-11-20 14:03:37.592644] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:42:40.374 [2024-11-20 14:03:37.601787] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:42:40.374 [2024-11-20 14:03:37.601829] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:42:40.374 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:42:40.374 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:40.374 14:03:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:42:40.374 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.374 14:03:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:41.311 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.311 14:03:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:41.311 14:03:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:42:41.311 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.311 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:41.570 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.570 14:03:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:41.570 14:03:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:42:41.570 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.570 14:03:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:42.138 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.138 14:03:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:42:42.138 14:03:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:42:42.138 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.138 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:42.396 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:42:42.397 14:03:39 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.397 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:42:42.656 00:42:42.656 real 0m4.981s 00:42:42.656 user 0m1.136s 00:42:42.656 sys 0m0.238s 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.656 14:03:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:42:42.656 ************************************ 00:42:42.656 END TEST test_create_multi_ublk 00:42:42.656 ************************************ 00:42:42.656 14:03:39 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:42.656 14:03:39 ublk -- ublk/ublk.sh@147 -- # cleanup 00:42:42.656 14:03:39 ublk -- ublk/ublk.sh@130 -- # killprocess 75902 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@954 -- # '[' -z 75902 ']' 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@958 -- # kill -0 75902 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@959 -- # uname 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75902 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75902' 00:42:42.656 killing process with pid 75902 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@973 -- # kill 75902 00:42:42.656 14:03:39 ublk -- common/autotest_common.sh@978 -- # wait 75902 00:42:44.034 [2024-11-20 14:03:41.051665] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:42:44.034 [2024-11-20 14:03:41.051726] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:42:45.413 00:42:45.413 real 0m31.716s 00:42:45.413 user 0m45.865s 00:42:45.413 sys 0m10.519s 00:42:45.413 ************************************ 00:42:45.413 END TEST ublk 00:42:45.413 ************************************ 00:42:45.413 14:03:42 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.413 14:03:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:42:45.413 14:03:42 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:42:45.413 
14:03:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:45.413 14:03:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.413 14:03:42 -- common/autotest_common.sh@10 -- # set +x 00:42:45.413 ************************************ 00:42:45.413 START TEST ublk_recovery 00:42:45.413 ************************************ 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:42:45.413 * Looking for test storage... 00:42:45.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:45.413 14:03:42 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.413 --rc genhtml_branch_coverage=1 00:42:45.413 --rc genhtml_function_coverage=1 00:42:45.413 --rc genhtml_legend=1 00:42:45.413 --rc geninfo_all_blocks=1 00:42:45.413 --rc geninfo_unexecuted_blocks=1 00:42:45.413 00:42:45.413 ' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.413 --rc genhtml_branch_coverage=1 00:42:45.413 --rc genhtml_function_coverage=1 00:42:45.413 --rc genhtml_legend=1 00:42:45.413 --rc geninfo_all_blocks=1 00:42:45.413 --rc geninfo_unexecuted_blocks=1 00:42:45.413 00:42:45.413 ' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.413 --rc genhtml_branch_coverage=1 00:42:45.413 --rc genhtml_function_coverage=1 00:42:45.413 --rc genhtml_legend=1 00:42:45.413 --rc geninfo_all_blocks=1 00:42:45.413 --rc geninfo_unexecuted_blocks=1 00:42:45.413 00:42:45.413 ' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.413 --rc genhtml_branch_coverage=1 00:42:45.413 --rc genhtml_function_coverage=1 00:42:45.413 --rc genhtml_legend=1 00:42:45.413 --rc geninfo_all_blocks=1 00:42:45.413 --rc geninfo_unexecuted_blocks=1 00:42:45.413 00:42:45.413 ' 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:42:45.413 14:03:42 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76325 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:45.413 14:03:42 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76325 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76325 ']' 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:45.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.413 14:03:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.673 [2024-11-20 14:03:42.817324] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:45.673 [2024-11-20 14:03:42.817472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 00:42:45.673 [2024-11-20 14:03:42.988485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:45.932 [2024-11-20 14:03:43.102288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.932 [2024-11-20 14:03:43.102319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.868 14:03:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.868 14:03:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:42:46.868 14:03:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:42:46.868 14:03:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.868 14:03:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:46.868 [2024-11-20 14:03:43.997502] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:42:46.868 [2024-11-20 14:03:44.000151] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.868 14:03:44 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:46.868 malloc0 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.868 14:03:44 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.868 14:03:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:46.868 [2024-11-20 14:03:44.169678] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:42:46.868 [2024-11-20 14:03:44.169822] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:42:46.868 [2024-11-20 14:03:44.169836] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:42:46.868 [2024-11-20 14:03:44.169848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:42:46.868 [2024-11-20 14:03:44.178627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:42:46.868 [2024-11-20 14:03:44.178652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:42:46.868 [2024-11-20 14:03:44.185514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:42:46.868 [2024-11-20 14:03:44.185665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:42:47.126 [2024-11-20 14:03:44.200504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:42:47.126 1 00:42:47.126 14:03:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.126 14:03:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:42:48.061 14:03:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76366 00:42:48.061 14:03:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:42:48.061 14:03:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:42:48.061 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:48.061 fio-3.35 00:42:48.061 Starting 1 process 00:42:53.331 14:03:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76325 00:42:53.331 14:03:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:42:58.603 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76325 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:42:58.603 14:03:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76477 00:42:58.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:58.603 14:03:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:58.603 14:03:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76477 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76477 ']' 00:42:58.603 14:03:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:58.603 14:03:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:58.603 [2024-11-20 14:03:55.366861] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:42:58.603 [2024-11-20 14:03:55.367041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76477 ] 00:42:58.603 [2024-11-20 14:03:55.569831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:58.603 [2024-11-20 14:03:55.749459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:58.603 [2024-11-20 14:03:55.749519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:59.540 14:03:56 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:59.540 14:03:56 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:42:59.541 14:03:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:42:59.541 14:03:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.541 14:03:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:59.541 [2024-11-20 14:03:56.717506] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:42:59.541 [2024-11-20 14:03:56.720033] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:42:59.541 14:03:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.541 14:03:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:42:59.541 14:03:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.541 14:03:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:59.800 malloc0 00:42:59.800 14:03:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.800 14:03:56 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:42:59.800 14:03:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.800 14:03:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:42:59.800 [2024-11-20 14:03:56.882672] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:42:59.800 [2024-11-20 14:03:56.882726] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:42:59.800 [2024-11-20 14:03:56.882739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:42:59.800 [2024-11-20 14:03:56.890557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:42:59.800 [2024-11-20 14:03:56.890599] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:42:59.800 [2024-11-20 14:03:56.890611] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:42:59.800 [2024-11-20 14:03:56.890721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:42:59.800 1 00:42:59.800 14:03:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.800 14:03:56 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76366 00:42:59.800 [2024-11-20 14:03:56.898517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:42:59.800 [2024-11-20 14:03:56.906040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:42:59.800 [2024-11-20 14:03:56.913727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:42:59.800 [2024-11-20 
14:03:56.913758] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:43:56.033 00:43:56.033 fio_test: (groupid=0, jobs=1): err= 0: pid=76369: Wed Nov 20 14:04:45 2024 00:43:56.033 read: IOPS=20.1k, BW=78.4MiB/s (82.2MB/s)(4704MiB/60002msec) 00:43:56.033 slat (usec): min=2, max=702, avg= 6.68, stdev= 2.64 00:43:56.033 clat (usec): min=973, max=6707.9k, avg=3119.41, stdev=48103.81 00:43:56.033 lat (usec): min=979, max=6707.9k, avg=3126.09, stdev=48103.81 00:43:56.033 clat percentiles (usec): 00:43:56.033 | 1.00th=[ 2089], 5.00th=[ 2278], 10.00th=[ 2311], 20.00th=[ 2376], 00:43:56.034 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2737], 00:43:56.034 | 70.00th=[ 2835], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3884], 00:43:56.034 | 99.00th=[ 5407], 99.50th=[ 5997], 99.90th=[ 7439], 99.95th=[ 7963], 00:43:56.034 | 99.99th=[12649] 00:43:56.034 bw ( KiB/s): min= 2384, max=103480, per=100.00%, avg=89252.43, stdev=13391.70, samples=107 00:43:56.034 iops : min= 596, max=25870, avg=22313.09, stdev=3347.93, samples=107 00:43:56.034 write: IOPS=20.1k, BW=78.3MiB/s (82.1MB/s)(4701MiB/60002msec); 0 zone resets 00:43:56.034 slat (usec): min=2, max=721, avg= 6.71, stdev= 2.59 00:43:56.034 clat (usec): min=944, max=6708.1k, avg=3245.97, stdev=49649.29 00:43:56.034 lat (usec): min=954, max=6708.1k, avg=3252.68, stdev=49649.29 00:43:56.034 clat percentiles (usec): 00:43:56.034 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:43:56.034 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:43:56.034 | 70.00th=[ 2966], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3884], 00:43:56.034 | 99.00th=[ 5407], 99.50th=[ 6128], 99.90th=[ 7570], 99.95th=[ 8160], 00:43:56.034 | 99.99th=[12780] 00:43:56.034 bw ( KiB/s): min= 2784, max=102688, per=100.00%, avg=89193.39, stdev=13238.57, samples=107 00:43:56.034 iops : min= 696, max=25672, avg=22298.33, stdev=3309.65, samples=107 00:43:56.034 lat (usec) : 1000=0.01% 00:43:56.034 lat (msec) : 2=0.55%, 4=94.96%, 10=4.47%, 20=0.01%, >=2000=0.01% 00:43:56.034 cpu : usr=9.53%, sys=27.13%, ctx=82799, majf=0, minf=13 00:43:56.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:43:56.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:56.034 issued rwts: total=1204211,1203361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:56.034 00:43:56.034 Run status group 0 (all jobs): 00:43:56.034 READ: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=4704MiB (4932MB), run=60002-60002msec 00:43:56.034 WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=4701MiB (4929MB), run=60002-60002msec 00:43:56.034 00:43:56.034 Disk stats (read/write): 00:43:56.034 ublkb1: ios=1201438/1200598, merge=0/0, ticks=3642661/3658421, in_queue=7301083, util=99.93% 00:43:56.034 14:04:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:43:56.034 [2024-11-20 14:04:45.492125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:43:56.034 [2024-11-20 14:04:45.529540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:43:56.034 
[2024-11-20 14:04:45.529764] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:43:56.034 [2024-11-20 14:04:45.538544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:43:56.034 [2024-11-20 14:04:45.538671] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:43:56.034 [2024-11-20 14:04:45.538686] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.034 14:04:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:43:56.034 [2024-11-20 14:04:45.551645] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:43:56.034 [2024-11-20 14:04:45.561503] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:43:56.034 [2024-11-20 14:04:45.561556] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.034 14:04:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:56.034 14:04:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:43:56.034 14:04:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76477 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76477 ']' 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76477 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76477 00:43:56.034 killing process with pid 76477 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76477' 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76477 00:43:56.034 14:04:45 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76477 00:43:56.034 [2024-11-20 14:04:47.308251] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:43:56.034 [2024-11-20 14:04:47.308530] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:43:56.034 ************************************ 00:43:56.034 END TEST ublk_recovery 00:43:56.034 ************************************ 00:43:56.034 00:43:56.034 real 1m6.391s 00:43:56.034 user 1m48.676s 00:43:56.034 sys 0m35.271s 00:43:56.034 14:04:48 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:56.034 14:04:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:43:56.034 14:04:48 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:43:56.034 14:04:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:43:56.034 14:04:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:56.034 14:04:48 -- common/autotest_common.sh@10 -- # set +x 00:43:56.034 14:04:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:43:56.034 
14:04:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:43:56.034 14:04:48 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:43:56.034 14:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:56.034 14:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:56.034 14:04:48 -- common/autotest_common.sh@10 -- # set +x 00:43:56.034 ************************************ 00:43:56.034 START TEST ftl 00:43:56.034 ************************************ 00:43:56.034 14:04:48 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:43:56.034 * Looking for test storage... 00:43:56.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.034 14:04:49 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.034 14:04:49 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.034 14:04:49 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.034 14:04:49 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.034 14:04:49 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.034 14:04:49 ftl -- scripts/common.sh@344 -- # case "$op" in 00:43:56.034 14:04:49 ftl -- scripts/common.sh@345 -- # : 1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.034 14:04:49 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:56.034 14:04:49 ftl -- scripts/common.sh@365 -- # decimal 1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@353 -- # local d=1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.034 14:04:49 ftl -- scripts/common.sh@355 -- # echo 1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.034 14:04:49 ftl -- scripts/common.sh@366 -- # decimal 2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@353 -- # local d=2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.034 14:04:49 ftl -- scripts/common.sh@355 -- # echo 2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.034 14:04:49 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.034 14:04:49 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.034 14:04:49 ftl -- scripts/common.sh@368 -- # return 0 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.034 --rc genhtml_branch_coverage=1 00:43:56.034 --rc genhtml_function_coverage=1 00:43:56.034 --rc genhtml_legend=1 00:43:56.034 --rc geninfo_all_blocks=1 00:43:56.034 --rc geninfo_unexecuted_blocks=1 00:43:56.034 00:43:56.034 ' 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.034 --rc genhtml_branch_coverage=1 00:43:56.034 --rc genhtml_function_coverage=1 00:43:56.034 --rc genhtml_legend=1 00:43:56.034 --rc geninfo_all_blocks=1 00:43:56.034 --rc geninfo_unexecuted_blocks=1 00:43:56.034 00:43:56.034 ' 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.034 --rc genhtml_branch_coverage=1 00:43:56.034 --rc genhtml_function_coverage=1 00:43:56.034 --rc genhtml_legend=1 00:43:56.034 --rc geninfo_all_blocks=1 00:43:56.034 --rc geninfo_unexecuted_blocks=1 00:43:56.034 00:43:56.034 ' 00:43:56.034 14:04:49 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.034 --rc genhtml_branch_coverage=1 00:43:56.034 --rc genhtml_function_coverage=1 00:43:56.034 --rc genhtml_legend=1 00:43:56.034 --rc geninfo_all_blocks=1 00:43:56.034 --rc geninfo_unexecuted_blocks=1 00:43:56.034 00:43:56.034 ' 00:43:56.034 14:04:49 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:56.034 14:04:49 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:43:56.034 14:04:49 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:56.034 14:04:49 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:56.034 14:04:49 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:43:56.034 14:04:49 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:56.035 14:04:49 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:56.035 14:04:49 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:56.035 14:04:49 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:56.035 14:04:49 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:56.035 14:04:49 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:56.035 14:04:49 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:56.035 14:04:49 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:56.035 14:04:49 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:56.035 14:04:49 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:56.035 14:04:49 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:56.035 14:04:49 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:56.035 14:04:49 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:56.035 14:04:49 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:56.035 14:04:49 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:56.035 14:04:49 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:56.035 14:04:49 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:56.035 14:04:49 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:56.035 14:04:49 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:56.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:56.035 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:56.035 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:56.035 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:56.035 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77273 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:43:56.035 14:04:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77273 00:43:56.035 14:04:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 77273 ']' 00:43:56.035 14:04:49 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.035 14:04:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.035 14:04:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.035 14:04:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.035 14:04:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:56.035 [2024-11-20 14:04:49.986560] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:43:56.035 [2024-11-20 14:04:49.986729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77273 ] 00:43:56.035 [2024-11-20 14:04:50.190504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.035 [2024-11-20 14:04:50.358607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:56.035 14:04:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:56.035 14:04:50 ftl -- common/autotest_common.sh@868 -- # return 0 00:43:56.035 14:04:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:43:56.035 14:04:51 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:43:56.035 14:04:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:43:56.035 14:04:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:56.035 14:04:52 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:43:56.035 14:04:52 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:43:56.035 14:04:52 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@50 -- # break 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@63 -- # break 00:43:56.035 14:04:53 ftl -- ftl/ftl.sh@66 -- # killprocess 77273 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@954 -- # '[' -z 77273 ']' 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@958 -- # kill -0 77273 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@959 -- # uname 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.035 14:04:53 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77273 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.035 killing process with pid 77273 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77273' 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@973 -- # kill 77273 00:43:56.035 14:04:53 ftl -- common/autotest_common.sh@978 -- # wait 77273 00:43:58.571 14:04:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:43:58.571 14:04:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:43:58.571 14:04:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:58.571 14:04:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:58.571 14:04:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:58.571 ************************************ 00:43:58.571 START TEST ftl_fio_basic 00:43:58.571 ************************************ 00:43:58.571 14:04:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:43:58.831 * Looking for test storage... 00:43:58.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:58.831 14:04:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:58.831 14:04:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:43:58.831 14:04:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:43:58.831 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:58.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.832 --rc genhtml_branch_coverage=1 00:43:58.832 --rc genhtml_function_coverage=1 00:43:58.832 --rc genhtml_legend=1 00:43:58.832 --rc geninfo_all_blocks=1 00:43:58.832 --rc geninfo_unexecuted_blocks=1 00:43:58.832 00:43:58.832 ' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:58.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.832 --rc genhtml_branch_coverage=1 00:43:58.832 --rc genhtml_function_coverage=1 00:43:58.832 --rc genhtml_legend=1 00:43:58.832 --rc geninfo_all_blocks=1 00:43:58.832 --rc geninfo_unexecuted_blocks=1 00:43:58.832 00:43:58.832 ' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:58.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.832 --rc genhtml_branch_coverage=1 00:43:58.832 --rc genhtml_function_coverage=1 00:43:58.832 --rc genhtml_legend=1 00:43:58.832 --rc geninfo_all_blocks=1 00:43:58.832 --rc geninfo_unexecuted_blocks=1 00:43:58.832 00:43:58.832 ' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:58.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.832 --rc genhtml_branch_coverage=1 00:43:58.832 --rc genhtml_function_coverage=1 00:43:58.832 --rc genhtml_legend=1 00:43:58.832 --rc geninfo_all_blocks=1 00:43:58.832 --rc geninfo_unexecuted_blocks=1 00:43:58.832 00:43:58.832 ' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
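For reference, the lcov version check traced above (scripts/common.sh cmp_versions) is a component-wise compare: split both version strings on '.', '-' and ':', then walk the components numerically. A minimal standalone sketch of that pattern, with a hypothetical name lt_version and missing components defaulted to 0 (the real script additionally routes each component through its decimal helper):

lt_version() {  # returns 0 if $1 sorts before $2, e.g. lt_version 1.15 2
    local ver1 ver2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1  # equal versions are not less-than
}

With lt_version 1.15 2 the first components already decide it (1 < 2), which is why the trace above returns 0 and selects the matching --rc option set.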
00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:43:58.832 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77427 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77427 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77427 ']' 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:58.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:58.833 14:04:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:59.092 [2024-11-20 14:04:56.196251] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
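The startup just traced is the standard SPDK wait-for-RPC pattern: launch spdk_tgt in the background, record its pid, and poll the UNIX domain socket until an RPC succeeds or the process dies. A simplified sketch of that loop (the real waitforlisten in autotest_common.sh is more elaborate), using the real rpc_get_methods RPC as the probe and the max_retries=100 value from the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
svcpid=$!
for ((i = 0; i < 100; i++)); do
    "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$svcpid" 2> /dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done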
00:43:59.092 [2024-11-20 14:04:56.196436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77427 ] 00:43:59.092 [2024-11-20 14:04:56.399669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:59.352 [2024-11-20 14:04:56.573316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:59.352 [2024-11-20 14:04:56.573363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:59.352 [2024-11-20 14:04:56.573375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:44:00.289 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:44:00.547 14:04:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:00.806 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:00.806 { 00:44:00.806 "name": "nvme0n1", 00:44:00.806 "aliases": [ 00:44:00.806 "f810e089-9de6-4ef8-a56b-0eb5ac0aa59d" 00:44:00.806 ], 00:44:00.806 "product_name": "NVMe disk", 00:44:00.806 "block_size": 4096, 00:44:00.806 "num_blocks": 1310720, 00:44:00.806 "uuid": "f810e089-9de6-4ef8-a56b-0eb5ac0aa59d", 00:44:00.806 "numa_id": -1, 00:44:00.806 "assigned_rate_limits": { 00:44:00.806 "rw_ios_per_sec": 0, 00:44:00.806 "rw_mbytes_per_sec": 0, 00:44:00.806 "r_mbytes_per_sec": 0, 00:44:00.806 "w_mbytes_per_sec": 0 00:44:00.806 }, 00:44:00.806 "claimed": false, 00:44:00.806 "zoned": false, 00:44:00.806 "supported_io_types": { 00:44:00.806 "read": true, 00:44:00.806 "write": true, 00:44:00.806 "unmap": true, 00:44:00.806 "flush": true, 00:44:00.806 "reset": true, 00:44:00.806 "nvme_admin": true, 00:44:00.806 "nvme_io": true, 00:44:00.806 "nvme_io_md": false, 00:44:00.806 "write_zeroes": true, 00:44:00.806 "zcopy": false, 00:44:00.806 "get_zone_info": false, 00:44:00.806 "zone_management": false, 00:44:00.806 "zone_append": false, 00:44:00.806 "compare": true, 00:44:00.806 "compare_and_write": false, 00:44:00.806 "abort": true, 00:44:00.806 
"seek_hole": false, 00:44:00.806 "seek_data": false, 00:44:00.806 "copy": true, 00:44:00.806 "nvme_iov_md": false 00:44:00.806 }, 00:44:00.806 "driver_specific": { 00:44:00.806 "nvme": [ 00:44:00.806 { 00:44:00.806 "pci_address": "0000:00:11.0", 00:44:00.806 "trid": { 00:44:00.806 "trtype": "PCIe", 00:44:00.806 "traddr": "0000:00:11.0" 00:44:00.806 }, 00:44:00.806 "ctrlr_data": { 00:44:00.806 "cntlid": 0, 00:44:00.806 "vendor_id": "0x1b36", 00:44:00.806 "model_number": "QEMU NVMe Ctrl", 00:44:00.806 "serial_number": "12341", 00:44:00.806 "firmware_revision": "8.0.0", 00:44:00.806 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:00.806 "oacs": { 00:44:00.806 "security": 0, 00:44:00.806 "format": 1, 00:44:00.806 "firmware": 0, 00:44:00.806 "ns_manage": 1 00:44:00.806 }, 00:44:00.806 "multi_ctrlr": false, 00:44:00.806 "ana_reporting": false 00:44:00.806 }, 00:44:00.806 "vs": { 00:44:00.806 "nvme_version": "1.4" 00:44:00.806 }, 00:44:00.806 "ns_data": { 00:44:00.806 "id": 1, 00:44:00.806 "can_share": false 00:44:00.806 } 00:44:00.806 } 00:44:00.806 ], 00:44:00.806 "mp_policy": "active_passive" 00:44:00.806 } 00:44:00.806 } 00:44:00.806 ]' 00:44:00.806 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:00.806 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:44:00.806 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:01.065 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:44:01.065 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:01.066 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:01.356 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:44:01.356 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:01.614 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=20c7a283-05ff-4452-b2ba-31c9e99d047b 00:44:01.614 14:04:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 20c7a283-05ff-4452-b2ba-31c9e99d047b 00:44:01.873 14:04:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=56df730e-bafc-4ad1-983e-4e4a2af27e89 
00:44:01.873 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:44:01.873 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:02.131 { 00:44:02.131 "name": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:02.131 "aliases": [ 00:44:02.131 "lvs/nvme0n1p0" 00:44:02.131 ], 00:44:02.131 "product_name": "Logical Volume", 00:44:02.131 "block_size": 4096, 00:44:02.131 "num_blocks": 26476544, 00:44:02.131 "uuid": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:02.131 "assigned_rate_limits": { 00:44:02.131 "rw_ios_per_sec": 0, 00:44:02.131 "rw_mbytes_per_sec": 0, 00:44:02.131 "r_mbytes_per_sec": 0, 00:44:02.131 "w_mbytes_per_sec": 0 00:44:02.131 }, 00:44:02.131 "claimed": false, 00:44:02.131 "zoned": false, 00:44:02.131 "supported_io_types": { 00:44:02.131 "read": true, 00:44:02.131 "write": true, 00:44:02.131 "unmap": true, 00:44:02.131 "flush": false, 00:44:02.131 "reset": true, 00:44:02.131 "nvme_admin": false, 00:44:02.131 "nvme_io": false, 00:44:02.131 "nvme_io_md": false, 00:44:02.131 "write_zeroes": true, 00:44:02.131 "zcopy": false, 00:44:02.131 "get_zone_info": false, 00:44:02.131 "zone_management": false, 00:44:02.131 "zone_append": false, 00:44:02.131 "compare": false, 00:44:02.131 "compare_and_write": false, 00:44:02.131 "abort": false, 00:44:02.131 "seek_hole": true, 00:44:02.131 "seek_data": true, 00:44:02.131 "copy": false, 00:44:02.131 "nvme_iov_md": false 00:44:02.131 }, 00:44:02.131 "driver_specific": { 00:44:02.131 "lvol": { 00:44:02.131 "lvol_store_uuid": "20c7a283-05ff-4452-b2ba-31c9e99d047b", 00:44:02.131 "base_bdev": "nvme0n1", 00:44:02.131 "thin_provision": true, 00:44:02.131 "num_allocated_clusters": 0, 00:44:02.131 "snapshot": false, 00:44:02.131 "clone": false, 00:44:02.131 "esnap_clone": false 00:44:02.131 } 00:44:02.131 } 00:44:02.131 } 00:44:02.131 ]' 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:44:02.131 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.389 14:04:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:44:02.389 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.647 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:02.647 { 00:44:02.647 "name": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:02.647 "aliases": [ 00:44:02.647 "lvs/nvme0n1p0" 00:44:02.647 ], 00:44:02.647 "product_name": "Logical Volume", 00:44:02.647 "block_size": 4096, 00:44:02.647 "num_blocks": 26476544, 00:44:02.647 "uuid": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:02.647 "assigned_rate_limits": { 00:44:02.647 "rw_ios_per_sec": 0, 00:44:02.647 "rw_mbytes_per_sec": 0, 00:44:02.647 "r_mbytes_per_sec": 0, 00:44:02.647 "w_mbytes_per_sec": 0 00:44:02.647 }, 00:44:02.647 "claimed": false, 00:44:02.647 "zoned": false, 00:44:02.647 "supported_io_types": { 00:44:02.647 "read": true, 00:44:02.647 "write": true, 00:44:02.647 "unmap": true, 00:44:02.647 "flush": false, 00:44:02.647 "reset": true, 00:44:02.647 "nvme_admin": false, 00:44:02.647 "nvme_io": false, 00:44:02.647 "nvme_io_md": false, 00:44:02.647 "write_zeroes": true, 00:44:02.647 "zcopy": false, 00:44:02.647 "get_zone_info": false, 00:44:02.647 "zone_management": false, 00:44:02.647 "zone_append": false, 00:44:02.647 "compare": false, 00:44:02.647 "compare_and_write": false, 00:44:02.647 "abort": false, 00:44:02.647 "seek_hole": true, 00:44:02.647 "seek_data": true, 00:44:02.647 "copy": false, 00:44:02.647 "nvme_iov_md": false 00:44:02.647 }, 00:44:02.647 "driver_specific": { 00:44:02.647 "lvol": { 00:44:02.647 "lvol_store_uuid": "20c7a283-05ff-4452-b2ba-31c9e99d047b", 00:44:02.647 "base_bdev": "nvme0n1", 00:44:02.647 "thin_provision": true, 00:44:02.647 "num_allocated_clusters": 0, 00:44:02.647 "snapshot": false, 00:44:02.647 "clone": false, 00:44:02.647 "esnap_clone": false 00:44:02.647 } 00:44:02.647 } 00:44:02.647 } 00:44:02.647 ]' 00:44:02.647 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:02.647 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:44:02.647 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:02.906 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:02.906 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:02.906 14:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:44:02.906 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:44:02.906 14:04:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:44:02.906 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:44:02.906 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56df730e-bafc-4ad1-983e-4e4a2af27e89 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:03.165 { 00:44:03.165 "name": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:03.165 "aliases": [ 00:44:03.165 "lvs/nvme0n1p0" 00:44:03.165 ], 00:44:03.165 "product_name": "Logical Volume", 00:44:03.165 "block_size": 4096, 00:44:03.165 "num_blocks": 26476544, 00:44:03.165 "uuid": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:03.165 "assigned_rate_limits": { 00:44:03.165 "rw_ios_per_sec": 0, 00:44:03.165 "rw_mbytes_per_sec": 0, 00:44:03.165 "r_mbytes_per_sec": 0, 00:44:03.165 "w_mbytes_per_sec": 0 00:44:03.165 }, 00:44:03.165 "claimed": false, 00:44:03.165 "zoned": false, 00:44:03.165 "supported_io_types": { 00:44:03.165 "read": true, 00:44:03.165 "write": true, 00:44:03.165 "unmap": true, 00:44:03.165 "flush": false, 00:44:03.165 "reset": true, 00:44:03.165 "nvme_admin": false, 00:44:03.165 "nvme_io": false, 00:44:03.165 "nvme_io_md": false, 00:44:03.165 "write_zeroes": true, 00:44:03.165 "zcopy": false, 00:44:03.165 "get_zone_info": false, 00:44:03.165 "zone_management": false, 00:44:03.165 "zone_append": false, 00:44:03.165 "compare": false, 00:44:03.165 "compare_and_write": false, 00:44:03.165 "abort": false, 00:44:03.165 "seek_hole": true, 00:44:03.165 "seek_data": true, 00:44:03.165 "copy": false, 00:44:03.165 "nvme_iov_md": false 00:44:03.165 }, 00:44:03.165 "driver_specific": { 00:44:03.165 "lvol": { 00:44:03.165 "lvol_store_uuid": "20c7a283-05ff-4452-b2ba-31c9e99d047b", 00:44:03.165 "base_bdev": "nvme0n1", 00:44:03.165 "thin_provision": true, 00:44:03.165 "num_allocated_clusters": 0, 00:44:03.165 "snapshot": false, 00:44:03.165 "clone": false, 00:44:03.165 "esnap_clone": false 00:44:03.165 } 00:44:03.165 } 00:44:03.165 } 00:44:03.165 ]' 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:44:03.165 14:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 56df730e-bafc-4ad1-983e-4e4a2af27e89 -c nvc0n1p0 --l2p_dram_limit 60 00:44:03.424 [2024-11-20 14:05:00.731953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.732016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:03.424 [2024-11-20 14:05:00.732038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:44:03.424 
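One real failure is captured a few lines up: fio.sh line 52 traced as '[' -eq 1 ']' and bash reported "[: -eq: unary operator expected". That is the classic unquoted-empty-variable pitfall: with the variable unset, [ $var -eq 1 ] collapses to [ -eq 1 ]. Here the test merely evaluates false and the run continues, but the usual hardening looks like the following sketch ($var is a hypothetical stand-in for whatever fio.sh tests on that line):

if [ "${var:-0}" -eq 1 ]; then  # default an empty value to 0
    echo enabled
fi
[[ $var -eq 1 ]] && echo enabled  # [[ ]] does no word splitting; empty compares as 0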
[2024-11-20 14:05:00.732050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.732156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.732175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:03.424 [2024-11-20 14:05:00.732190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:44:03.424 [2024-11-20 14:05:00.732202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.732241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:03.424 [2024-11-20 14:05:00.733502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:03.424 [2024-11-20 14:05:00.733548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.733560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:03.424 [2024-11-20 14:05:00.733577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:44:03.424 [2024-11-20 14:05:00.733588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.733773] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9ce8b6bb-170f-4f70-a91c-5e355ec43618 00:44:03.424 [2024-11-20 14:05:00.735503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.735552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:03.424 [2024-11-20 14:05:00.735571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:44:03.424 [2024-11-20 14:05:00.735604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.743554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.743610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:03.424 [2024-11-20 14:05:00.743624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.813 ms 00:44:03.424 [2024-11-20 14:05:00.743656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.743848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.743870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:03.424 [2024-11-20 14:05:00.743884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:44:03.424 [2024-11-20 14:05:00.743902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.744005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.424 [2024-11-20 14:05:00.744026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:03.424 [2024-11-20 14:05:00.744046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:44:03.424 [2024-11-20 14:05:00.744060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.424 [2024-11-20 14:05:00.744121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:03.684 [2024-11-20 14:05:00.750006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.684 [2024-11-20 
14:05:00.750053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:03.684 [2024-11-20 14:05:00.750073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.893 ms 00:44:03.684 [2024-11-20 14:05:00.750091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.684 [2024-11-20 14:05:00.750176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.684 [2024-11-20 14:05:00.750196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:03.684 [2024-11-20 14:05:00.750213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:03.684 [2024-11-20 14:05:00.750226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.684 [2024-11-20 14:05:00.750305] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:03.684 [2024-11-20 14:05:00.750533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:03.684 [2024-11-20 14:05:00.750569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:03.684 [2024-11-20 14:05:00.750587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:03.685 [2024-11-20 14:05:00.750605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:03.685 [2024-11-20 14:05:00.750619] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:03.685 [2024-11-20 14:05:00.750640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:03.685 [2024-11-20 14:05:00.750654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:03.685 [2024-11-20 14:05:00.750668] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:03.685 [2024-11-20 14:05:00.750679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:03.685 [2024-11-20 14:05:00.750705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.685 [2024-11-20 14:05:00.750732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:03.685 [2024-11-20 14:05:00.750751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:44:03.685 [2024-11-20 14:05:00.750767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.685 [2024-11-20 14:05:00.750883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.685 [2024-11-20 14:05:00.750901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:03.685 [2024-11-20 14:05:00.750924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:44:03.685 [2024-11-20 14:05:00.750940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.685 [2024-11-20 14:05:00.751098] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:03.685 [2024-11-20 14:05:00.751116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:03.685 [2024-11-20 14:05:00.751144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751183] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:44:03.685 [2024-11-20 14:05:00.751198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:03.685 [2024-11-20 14:05:00.751252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:03.685 [2024-11-20 14:05:00.751293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:03.685 [2024-11-20 14:05:00.751311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:03.685 [2024-11-20 14:05:00.751332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:03.685 [2024-11-20 14:05:00.751346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:03.685 [2024-11-20 14:05:00.751366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:03.685 [2024-11-20 14:05:00.751384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:03.685 [2024-11-20 14:05:00.751422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:03.685 [2024-11-20 14:05:00.751495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:03.685 [2024-11-20 14:05:00.751548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:03.685 [2024-11-20 14:05:00.751605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:03.685 [2024-11-20 14:05:00.751652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:03.685 [2024-11-20 14:05:00.751692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:03.685 [2024-11-20 14:05:00.751717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:03.685 [2024-11-20 14:05:00.751751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:03.685 [2024-11-20 14:05:00.751787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:03.685 [2024-11-20 14:05:00.751809] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:03.685 [2024-11-20 14:05:00.751822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:03.685 [2024-11-20 14:05:00.751851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:03.685 [2024-11-20 14:05:00.751867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:03.685 [2024-11-20 14:05:00.751904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:03.685 [2024-11-20 14:05:00.751927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.751945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:03.685 [2024-11-20 14:05:00.751966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:03.685 [2024-11-20 14:05:00.751981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:03.685 [2024-11-20 14:05:00.752002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:03.685 [2024-11-20 14:05:00.752019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:03.685 [2024-11-20 14:05:00.752043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:03.685 [2024-11-20 14:05:00.752057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:03.685 [2024-11-20 14:05:00.752075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:03.685 [2024-11-20 14:05:00.752092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:03.685 [2024-11-20 14:05:00.752110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:03.685 [2024-11-20 14:05:00.752135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:03.685 [2024-11-20 14:05:00.752164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:03.685 [2024-11-20 14:05:00.752203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:03.685 [2024-11-20 14:05:00.752223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:03.685 [2024-11-20 14:05:00.752241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:03.685 [2024-11-20 14:05:00.752256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:03.685 [2024-11-20 14:05:00.752275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:03.685 [2024-11-20 14:05:00.752294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:03.685 [2024-11-20 14:05:00.752316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:44:03.685 [2024-11-20 14:05:00.752332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:03.685 [2024-11-20 14:05:00.752355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:03.685 [2024-11-20 14:05:00.752445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:03.685 [2024-11-20 14:05:00.752465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:03.685 [2024-11-20 14:05:00.752532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:03.685 [2024-11-20 14:05:00.752548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:03.685 [2024-11-20 14:05:00.752573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:03.685 [2024-11-20 14:05:00.752594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:03.685 [2024-11-20 14:05:00.752612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:03.685 [2024-11-20 14:05:00.752628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:44:03.685 [2024-11-20 14:05:00.752649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:03.685 [2024-11-20 14:05:00.752750] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
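The layout dump above is internally consistent and worth a sizing check: the L2P keeps one 4-byte entry per 4 KiB logical block, so the 20971520 entries cover exactly the device later exposed as ftl0 and need 80 MiB in full, which is why the 60 MiB --l2p_dram_limit produces the "l2p maximum resident size is: 59 (of 60) MiB" notice further down. The arithmetic, runnable as-is:

entries=20971520                     # "L2P entries" from the layout dump
echo $((entries * 4096 / 1024**3))   # 80  -> GiB of logical space (ftl0 num_blocks = 20971520)
echo $((entries * 4 / 1024 / 1024))  # 80  -> MiB for the full L2P table ("Region l2p: 80.00 MiB")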
00:44:03.685 [2024-11-20 14:05:00.752779] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:06.976 [2024-11-20 14:05:04.177041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.177123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:06.976 [2024-11-20 14:05:04.177142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3424.269 ms 00:44:06.976 [2024-11-20 14:05:04.177158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.218556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.218621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:06.976 [2024-11-20 14:05:04.218640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.975 ms 00:44:06.976 [2024-11-20 14:05:04.218655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.218838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.218857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:06.976 [2024-11-20 14:05:04.218886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:06.976 [2024-11-20 14:05:04.218905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.282204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.282262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:06.976 [2024-11-20 14:05:04.282299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.219 ms 00:44:06.976 [2024-11-20 14:05:04.282316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.282373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.282389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:06.976 [2024-11-20 14:05:04.282401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:06.976 [2024-11-20 14:05:04.282414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.282946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.282973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:06.976 [2024-11-20 14:05:04.282986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:44:06.976 [2024-11-20 14:05:04.283004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.976 [2024-11-20 14:05:04.283145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.976 [2024-11-20 14:05:04.283169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:06.976 [2024-11-20 14:05:04.283182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:44:06.976 [2024-11-20 14:05:04.283198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.306583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.306643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:07.236 [2024-11-20 
14:05:04.306660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.352 ms 00:44:07.236 [2024-11-20 14:05:04.306673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.321001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:07.236 [2024-11-20 14:05:04.339256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.339343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:07.236 [2024-11-20 14:05:04.339364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.424 ms 00:44:07.236 [2024-11-20 14:05:04.339380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.417454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.417553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:07.236 [2024-11-20 14:05:04.417579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.999 ms 00:44:07.236 [2024-11-20 14:05:04.417591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.417870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.417899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:07.236 [2024-11-20 14:05:04.417919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:44:07.236 [2024-11-20 14:05:04.417930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.460282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.460376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:07.236 [2024-11-20 14:05:04.460397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.245 ms 00:44:07.236 [2024-11-20 14:05:04.460409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.501149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.501218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:07.236 [2024-11-20 14:05:04.501257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.636 ms 00:44:07.236 [2024-11-20 14:05:04.501269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.236 [2024-11-20 14:05:04.502185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.236 [2024-11-20 14:05:04.502218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:07.236 [2024-11-20 14:05:04.502235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:44:07.236 [2024-11-20 14:05:04.502247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.632361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.632438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:07.495 [2024-11-20 14:05:04.632465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.003 ms 00:44:07.495 [2024-11-20 14:05:04.632489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 
14:05:04.674827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.674900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:07.495 [2024-11-20 14:05:04.674921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.168 ms 00:44:07.495 [2024-11-20 14:05:04.674932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.716248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.716314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:07.495 [2024-11-20 14:05:04.716333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.225 ms 00:44:07.495 [2024-11-20 14:05:04.716344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.757848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.757938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:07.495 [2024-11-20 14:05:04.757959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.412 ms 00:44:07.495 [2024-11-20 14:05:04.757970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.758065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.758078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:07.495 [2024-11-20 14:05:04.758100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:07.495 [2024-11-20 14:05:04.758110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.758292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:07.495 [2024-11-20 14:05:04.758325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:07.495 [2024-11-20 14:05:04.758341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:44:07.495 [2024-11-20 14:05:04.758352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:07.495 [2024-11-20 14:05:04.759705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4027.181 ms, result 0 00:44:07.495 { 00:44:07.495 "name": "ftl0", 00:44:07.495 "uuid": "9ce8b6bb-170f-4f70-a91c-5e355ec43618" 00:44:07.495 } 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:07.495 14:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:07.754 14:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:44:08.013 [ 00:44:08.013 { 00:44:08.013 "name": "ftl0", 00:44:08.013 "aliases": [ 00:44:08.013 "9ce8b6bb-170f-4f70-a91c-5e355ec43618" 00:44:08.013 ], 00:44:08.013 "product_name": "FTL 
disk", 00:44:08.013 "block_size": 4096, 00:44:08.013 "num_blocks": 20971520, 00:44:08.013 "uuid": "9ce8b6bb-170f-4f70-a91c-5e355ec43618", 00:44:08.013 "assigned_rate_limits": { 00:44:08.013 "rw_ios_per_sec": 0, 00:44:08.013 "rw_mbytes_per_sec": 0, 00:44:08.013 "r_mbytes_per_sec": 0, 00:44:08.013 "w_mbytes_per_sec": 0 00:44:08.013 }, 00:44:08.013 "claimed": false, 00:44:08.013 "zoned": false, 00:44:08.013 "supported_io_types": { 00:44:08.013 "read": true, 00:44:08.013 "write": true, 00:44:08.013 "unmap": true, 00:44:08.013 "flush": true, 00:44:08.013 "reset": false, 00:44:08.013 "nvme_admin": false, 00:44:08.013 "nvme_io": false, 00:44:08.013 "nvme_io_md": false, 00:44:08.013 "write_zeroes": true, 00:44:08.013 "zcopy": false, 00:44:08.013 "get_zone_info": false, 00:44:08.013 "zone_management": false, 00:44:08.013 "zone_append": false, 00:44:08.013 "compare": false, 00:44:08.013 "compare_and_write": false, 00:44:08.013 "abort": false, 00:44:08.013 "seek_hole": false, 00:44:08.013 "seek_data": false, 00:44:08.013 "copy": false, 00:44:08.013 "nvme_iov_md": false 00:44:08.013 }, 00:44:08.013 "driver_specific": { 00:44:08.013 "ftl": { 00:44:08.013 "base_bdev": "56df730e-bafc-4ad1-983e-4e4a2af27e89", 00:44:08.013 "cache": "nvc0n1p0" 00:44:08.013 } 00:44:08.013 } 00:44:08.013 } 00:44:08.013 ] 00:44:08.013 14:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:44:08.013 14:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:44:08.013 14:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:44:08.277 14:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:44:08.277 14:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:08.538 [2024-11-20 14:05:05.716685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.716756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:08.538 [2024-11-20 14:05:05.716773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:08.538 [2024-11-20 14:05:05.716786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.716827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:08.538 [2024-11-20 14:05:05.721570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.721619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:08.538 [2024-11-20 14:05:05.721638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:44:08.538 [2024-11-20 14:05:05.721649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.722201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.722222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:08.538 [2024-11-20 14:05:05.722237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:44:08.538 [2024-11-20 14:05:05.722247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.725052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.725082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:08.538 
[2024-11-20 14:05:05.725097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:44:08.538 [2024-11-20 14:05:05.725109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.730737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.730935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:08.538 [2024-11-20 14:05:05.730966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.593 ms 00:44:08.538 [2024-11-20 14:05:05.730978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.773209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.773283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:08.538 [2024-11-20 14:05:05.773305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.117 ms 00:44:08.538 [2024-11-20 14:05:05.773317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.798444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.798547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:08.538 [2024-11-20 14:05:05.798572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.008 ms 00:44:08.538 [2024-11-20 14:05:05.798583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.538 [2024-11-20 14:05:05.798841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.538 [2024-11-20 14:05:05.798857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:08.538 [2024-11-20 14:05:05.798871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:44:08.538 [2024-11-20 14:05:05.798882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.539 [2024-11-20 14:05:05.840767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.539 [2024-11-20 14:05:05.840837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:08.539 [2024-11-20 14:05:05.840858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.844 ms 00:44:08.539 [2024-11-20 14:05:05.840870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.799 [2024-11-20 14:05:05.883753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.799 [2024-11-20 14:05:05.883816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:08.799 [2024-11-20 14:05:05.883845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.797 ms 00:44:08.799 [2024-11-20 14:05:05.883857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.799 [2024-11-20 14:05:05.928212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.799 [2024-11-20 14:05:05.928282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:08.799 [2024-11-20 14:05:05.928303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.262 ms 00:44:08.799 [2024-11-20 14:05:05.928315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.799 [2024-11-20 14:05:05.970513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.799 [2024-11-20 14:05:05.970816] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:08.799 [2024-11-20 14:05:05.970850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.960 ms 00:44:08.799 [2024-11-20 14:05:05.970862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.799 [2024-11-20 14:05:05.970960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:08.799 [2024-11-20 14:05:05.970981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.970998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 
[2024-11-20 14:05:05.971288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:44:08.799 [2024-11-20 14:05:05.971658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:08.799 [2024-11-20 14:05:05.971814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.971996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:08.800 [2024-11-20 14:05:05.972505] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:08.800 [2024-11-20 14:05:05.972520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ce8b6bb-170f-4f70-a91c-5e355ec43618 00:44:08.800 [2024-11-20 14:05:05.972533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:08.800 [2024-11-20 14:05:05.972551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:08.800 [2024-11-20 14:05:05.972563] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:08.800 [2024-11-20 14:05:05.972583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:08.800 [2024-11-20 14:05:05.972594] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:08.800 [2024-11-20 14:05:05.972609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:08.800 [2024-11-20 14:05:05.972629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:08.800 [2024-11-20 14:05:05.972643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:08.800 [2024-11-20 14:05:05.972654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:08.800 [2024-11-20 14:05:05.972669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.800 [2024-11-20 14:05:05.972682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:08.800 [2024-11-20 14:05:05.972698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:44:08.800 [2024-11-20 14:05:05.972710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.800 [2024-11-20 14:05:05.994602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.800 [2024-11-20 14:05:05.994674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:08.800 [2024-11-20 14:05:05.994694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.781 ms 00:44:08.800 [2024-11-20 14:05:05.994705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.800 [2024-11-20 14:05:05.995285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.800 [2024-11-20 14:05:05.995301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:08.800 [2024-11-20 14:05:05.995314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:44:08.800 [2024-11-20 14:05:05.995324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.800 [2024-11-20 14:05:06.067554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.800 [2024-11-20 14:05:06.067927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:08.800 [2024-11-20 14:05:06.067973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.800 [2024-11-20 14:05:06.067993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:44:08.800 [2024-11-20 14:05:06.068122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.800 [2024-11-20 14:05:06.068144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:08.800 [2024-11-20 14:05:06.068169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.800 [2024-11-20 14:05:06.068186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.800 [2024-11-20 14:05:06.068383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.800 [2024-11-20 14:05:06.068417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:08.800 [2024-11-20 14:05:06.068441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.800 [2024-11-20 14:05:06.068511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.800 [2024-11-20 14:05:06.068578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:08.800 [2024-11-20 14:05:06.068599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:08.800 [2024-11-20 14:05:06.068621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:08.800 [2024-11-20 14:05:06.068637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.207134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.207200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:09.059 [2024-11-20 14:05:06.207219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.207230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.320850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.320924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:09.059 [2024-11-20 14:05:06.320944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.320956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.321105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.321119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:09.059 [2024-11-20 14:05:06.321139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.321151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.321251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.321264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:09.059 [2024-11-20 14:05:06.321279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.321290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.321446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.321462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:09.059 [2024-11-20 14:05:06.321476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 
14:05:06.321517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.321593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.321608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:09.059 [2024-11-20 14:05:06.321622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.321634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.059 [2024-11-20 14:05:06.321721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.059 [2024-11-20 14:05:06.321734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:09.059 [2024-11-20 14:05:06.321749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.059 [2024-11-20 14:05:06.321779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.060 [2024-11-20 14:05:06.321849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:09.060 [2024-11-20 14:05:06.321869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:09.060 [2024-11-20 14:05:06.321883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:09.060 [2024-11-20 14:05:06.321895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.060 [2024-11-20 14:05:06.322080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 605.367 ms, result 0 00:44:09.060 true 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77427 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77427 ']' 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77427 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:09.060 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77427 00:44:09.319 killing process with pid 77427 00:44:09.319 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:09.319 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:09.319 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77427' 00:44:09.319 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77427 00:44:09.319 14:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77427 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:14.592 14:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:44:14.592 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:44:14.592 fio-3.35 00:44:14.592 Starting 1 thread 00:44:21.168 00:44:21.168 test: (groupid=0, jobs=1): err= 0: pid=77645: Wed Nov 20 14:05:17 2024 00:44:21.168 read: IOPS=883, BW=58.7MiB/s (61.6MB/s)(255MiB/4336msec) 00:44:21.168 slat (usec): min=6, max=142, avg=10.67, stdev= 4.54 00:44:21.168 clat (usec): min=332, max=830, avg=501.37, stdev=67.72 00:44:21.168 lat (usec): min=340, max=850, avg=512.04, stdev=68.51 00:44:21.168 clat percentiles (usec): 00:44:21.168 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 408], 20.00th=[ 457], 00:44:21.168 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 519], 00:44:21.168 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 619], 00:44:21.168 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 783], 99.95th=[ 783], 00:44:21.168 | 99.99th=[ 832] 00:44:21.168 write: IOPS=890, BW=59.1MiB/s (62.0MB/s)(256MiB/4331msec); 0 zone resets 00:44:21.168 slat (usec): min=19, max=122, avg=25.88, stdev= 6.81 00:44:21.168 clat (usec): min=391, max=1596, avg=575.80, stdev=78.28 00:44:21.168 lat (usec): min=414, max=1623, avg=601.68, stdev=78.83 00:44:21.168 clat percentiles (usec): 00:44:21.168 | 1.00th=[ 416], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 506], 00:44:21.168 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 586], 00:44:21.168 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 685], 00:44:21.168 | 99.00th=[ 824], 99.50th=[ 914], 99.90th=[ 1012], 99.95th=[ 1057], 00:44:21.168 | 99.99th=[ 1598] 00:44:21.168 bw ( KiB/s): min=56984, max=63648, per=99.43%, avg=60197.00, stdev=2141.65, samples=8 00:44:21.168 iops : min= 838, max= 936, avg=885.25, stdev=31.49, samples=8 00:44:21.168 lat (usec) : 500=36.48%, 750=62.43%, 1000=0.99% 00:44:21.168 lat (msec) : 2=0.10% 
00:44:21.168 cpu : usr=99.01%, sys=0.16%, ctx=7, majf=0, minf=1169 00:44:21.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:21.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:21.168 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:21.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:21.168 00:44:21.168 Run status group 0 (all jobs): 00:44:21.168 READ: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=255MiB (267MB), run=4336-4336msec 00:44:21.168 WRITE: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=256MiB (269MB), run=4331-4331msec 00:44:22.104 ----------------------------------------------------- 00:44:22.104 Suppressions used: 00:44:22.104 count bytes template 00:44:22.104 1 5 /usr/src/fio/parse.c 00:44:22.104 1 8 libtcmalloc_minimal.so 00:44:22.104 1 904 libcrypto.so 00:44:22.104 ----------------------------------------------------- 00:44:22.104 00:44:22.104 14:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:44:22.104 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:22.104 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:44:22.363 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:22.364 14:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:44:22.623 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:44:22.623 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:44:22.623 fio-3.35 00:44:22.623 Starting 2 threads 00:44:54.703 00:44:54.703 first_half: (groupid=0, jobs=1): err= 0: pid=77754: Wed Nov 20 14:05:48 2024 00:44:54.703 read: IOPS=2394, BW=9580KiB/s (9810kB/s)(255MiB/27241msec) 00:44:54.703 slat (nsec): min=3596, max=83987, avg=8452.88, stdev=3942.97 00:44:54.703 clat (usec): min=821, max=378513, avg=40025.78, stdev=21654.38 00:44:54.703 lat (usec): min=836, max=378526, avg=40034.23, stdev=21654.66 00:44:54.703 clat percentiles (msec): 00:44:54.703 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:44:54.703 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:44:54.703 | 70.00th=[ 38], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 50], 00:44:54.703 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 251], 99.95th=[ 300], 00:44:54.703 | 99.99th=[ 376] 00:44:54.703 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(256MiB/21588msec); 0 zone resets 00:44:54.703 slat (usec): min=5, max=936, avg=12.00, stdev= 8.93 00:44:54.703 clat (usec): min=402, max=177194, avg=13319.66, stdev=22748.60 00:44:54.703 lat (usec): min=412, max=177201, avg=13331.65, stdev=22749.32 00:44:54.703 clat percentiles (usec): 00:44:54.703 | 1.00th=[ 963], 5.00th=[ 1205], 10.00th=[ 1369], 20.00th=[ 1762], 00:44:54.703 | 30.00th=[ 3261], 40.00th=[ 4883], 50.00th=[ 6063], 60.00th=[ 7242], 00:44:54.703 | 70.00th=[ 8717], 80.00th=[ 13304], 90.00th=[ 23462], 95.00th=[ 79168], 00:44:54.703 | 99.00th=[ 94897], 99.50th=[ 98042], 99.90th=[154141], 99.95th=[175113], 00:44:54.703 | 99.99th=[177210] 00:44:54.703 bw ( KiB/s): min= 640, max=40656, per=90.77%, avg=20164.92, stdev=10596.62, samples=26 00:44:54.703 iops : min= 160, max=10164, avg=5041.23, stdev=2649.16, samples=26 00:44:54.703 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.60% 00:44:54.703 lat (msec) : 2=11.23%, 4=5.90%, 10=20.06%, 20=7.05%, 50=48.46% 00:44:54.703 lat (msec) : 100=5.09%, 250=1.49%, 500=0.05% 00:44:54.703 cpu : usr=99.09%, sys=0.23%, ctx=43, majf=0, minf=5507 00:44:54.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:44:54.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.703 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.703 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.703 second_half: (groupid=0, jobs=1): err= 0: pid=77755: Wed Nov 20 14:05:48 2024 00:44:54.703 read: IOPS=2381, BW=9526KiB/s (9755kB/s)(255MiB/27397msec) 00:44:54.703 slat (nsec): min=3829, max=54578, avg=7616.36, stdev=3004.07 00:44:54.703 clat (usec): min=901, max=372518, avg=39943.93, stdev=24406.01 00:44:54.703 lat (usec): min=909, max=372524, avg=39951.55, stdev=24406.43 00:44:54.703 clat percentiles (msec): 00:44:54.703 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:44:54.704 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:44:54.704 | 70.00th=[ 38], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 48], 
00:44:54.704 | 99.00th=[ 180], 99.50th=[ 199], 99.90th=[ 271], 99.95th=[ 326], 00:44:54.704 | 99.99th=[ 363] 00:44:54.704 write: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(256MiB/23602msec); 0 zone resets 00:44:54.704 slat (usec): min=4, max=609, avg=10.33, stdev= 6.45 00:44:54.704 clat (usec): min=416, max=177434, avg=13713.83, stdev=23471.85 00:44:54.704 lat (usec): min=427, max=177443, avg=13724.16, stdev=23472.30 00:44:54.704 clat percentiles (usec): 00:44:54.704 | 1.00th=[ 889], 5.00th=[ 1123], 10.00th=[ 1270], 20.00th=[ 1532], 00:44:54.704 | 30.00th=[ 1975], 40.00th=[ 3851], 50.00th=[ 5276], 60.00th=[ 6652], 00:44:54.704 | 70.00th=[ 8848], 80.00th=[ 14877], 90.00th=[ 34866], 95.00th=[ 80217], 00:44:54.704 | 99.00th=[ 94897], 99.50th=[ 98042], 99.90th=[173016], 99.95th=[177210], 00:44:54.704 | 99.99th=[177210] 00:44:54.704 bw ( KiB/s): min= 520, max=48872, per=87.41%, avg=19416.26, stdev=12742.19, samples=27 00:44:54.704 iops : min= 130, max=12218, avg=4854.04, stdev=3185.54, samples=27 00:44:54.704 lat (usec) : 500=0.01%, 750=0.07%, 1000=1.10% 00:44:54.704 lat (msec) : 2=14.18%, 4=5.64%, 10=16.51%, 20=6.94%, 50=48.81% 00:44:54.704 lat (msec) : 100=5.21%, 250=1.46%, 500=0.09% 00:44:54.704 cpu : usr=99.16%, sys=0.16%, ctx=119, majf=0, minf=5598 00:44:54.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:44:54.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.704 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.704 issued rwts: total=65249,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.704 00:44:54.704 Run status group 0 (all jobs): 00:44:54.704 READ: bw=18.6MiB/s (19.5MB/s), 9526KiB/s-9580KiB/s (9755kB/s-9810kB/s), io=510MiB (534MB), run=27241-27397msec 00:44:54.704 WRITE: bw=21.7MiB/s (22.7MB/s), 10.8MiB/s-11.9MiB/s (11.4MB/s-12.4MB/s), io=512MiB (537MB), run=21588-23602msec 00:44:54.704 ----------------------------------------------------- 00:44:54.704 Suppressions used: 00:44:54.704 count bytes template 00:44:54.704 2 10 /usr/src/fio/parse.c 00:44:54.704 2 192 /usr/src/fio/iolog.c 00:44:54.704 1 8 libtcmalloc_minimal.so 00:44:54.704 1 904 libcrypto.so 00:44:54.704 ----------------------------------------------------- 00:44:54.704 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:54.704 14:05:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:44:54.704 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:44:54.704 fio-3.35 00:44:54.704 Starting 1 thread 00:45:12.839 00:45:12.839 test: (groupid=0, jobs=1): err= 0: pid=78106: Wed Nov 20 14:06:07 2024 00:45:12.839 read: IOPS=7338, BW=28.7MiB/s (30.1MB/s)(255MiB/8885msec) 00:45:12.839 slat (usec): min=3, max=118, avg= 5.89, stdev= 1.82 00:45:12.839 clat (usec): min=699, max=33794, avg=17433.35, stdev=1083.49 00:45:12.839 lat (usec): min=709, max=33798, avg=17439.24, stdev=1083.55 00:45:12.839 clat percentiles (usec): 00:45:12.839 | 1.00th=[16057], 5.00th=[16712], 10.00th=[16909], 20.00th=[16909], 00:45:12.839 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:45:12.839 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:45:12.839 | 99.00th=[22414], 99.50th=[23725], 99.90th=[25560], 99.95th=[29754], 00:45:12.839 | 99.99th=[33162] 00:45:12.839 write: IOPS=12.6k, BW=49.1MiB/s (51.4MB/s)(256MiB/5218msec); 0 zone resets 00:45:12.839 slat (usec): min=4, max=1207, avg= 8.38, stdev= 9.19 00:45:12.839 clat (usec): min=571, max=70390, avg=10141.86, stdev=12247.39 00:45:12.839 lat (usec): min=580, max=70399, avg=10150.25, stdev=12247.37 00:45:12.839 clat percentiles (usec): 00:45:12.839 | 1.00th=[ 873], 5.00th=[ 1012], 10.00th=[ 1106], 20.00th=[ 1270], 00:45:12.839 | 30.00th=[ 1467], 40.00th=[ 1909], 50.00th=[ 7308], 60.00th=[ 8455], 00:45:12.839 | 70.00th=[ 9634], 80.00th=[11076], 90.00th=[34866], 95.00th=[37487], 00:45:12.839 | 99.00th=[45351], 99.50th=[46924], 99.90th=[51643], 99.95th=[54264], 00:45:12.839 | 99.99th=[68682] 00:45:12.839 bw ( KiB/s): min=16528, max=60216, per=94.87%, avg=47662.55, stdev=11864.98, samples=11 00:45:12.839 iops : min= 4132, max=15054, avg=11915.64, stdev=2966.25, samples=11 00:45:12.839 lat (usec) : 750=0.06%, 1000=2.27% 00:45:12.839 lat (msec) : 2=17.95%, 4=0.80%, 10=15.47%, 20=53.57%, 50=9.79% 00:45:12.839 lat (msec) : 100=0.09% 00:45:12.839 cpu : usr=98.73%, sys=0.43%, ctx=26, 
majf=0, minf=5565 00:45:12.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:45:12.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.839 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:12.839 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:12.839 00:45:12.839 Run status group 0 (all jobs): 00:45:12.839 READ: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=255MiB (267MB), run=8885-8885msec 00:45:12.839 WRITE: bw=49.1MiB/s (51.4MB/s), 49.1MiB/s-49.1MiB/s (51.4MB/s-51.4MB/s), io=256MiB (268MB), run=5218-5218msec 00:45:12.839 ----------------------------------------------------- 00:45:12.839 Suppressions used: 00:45:12.839 count bytes template 00:45:12.839 1 5 /usr/src/fio/parse.c 00:45:12.839 2 192 /usr/src/fio/iolog.c 00:45:12.839 1 8 libtcmalloc_minimal.so 00:45:12.839 1 904 libcrypto.so 00:45:12.839 ----------------------------------------------------- 00:45:12.839 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:45:12.839 Remove shared memory files 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58016 /dev/shm/spdk_tgt_trace.pid76325 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:45:12.839 ************************************ 00:45:12.839 END TEST ftl_fio_basic 00:45:12.839 ************************************ 00:45:12.839 00:45:12.839 real 1m13.510s 00:45:12.839 user 2m41.138s 00:45:12.839 sys 0m4.109s 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:12.839 14:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:45:12.839 14:06:09 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:45:12.839 14:06:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:12.839 14:06:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:12.839 14:06:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:45:12.839 ************************************ 00:45:12.839 START TEST ftl_bdevperf 00:45:12.839 ************************************ 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:45:12.839 * Looking for test storage... 
00:45:12.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.839 --rc genhtml_branch_coverage=1 00:45:12.839 --rc genhtml_function_coverage=1 00:45:12.839 --rc genhtml_legend=1 00:45:12.839 --rc geninfo_all_blocks=1 00:45:12.839 --rc geninfo_unexecuted_blocks=1 00:45:12.839 00:45:12.839 ' 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.839 --rc genhtml_branch_coverage=1 00:45:12.839 
--rc genhtml_function_coverage=1 00:45:12.839 --rc genhtml_legend=1 00:45:12.839 --rc geninfo_all_blocks=1 00:45:12.839 --rc geninfo_unexecuted_blocks=1 00:45:12.839 00:45:12.839 ' 00:45:12.839 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:12.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.839 --rc genhtml_branch_coverage=1 00:45:12.839 --rc genhtml_function_coverage=1 00:45:12.839 --rc genhtml_legend=1 00:45:12.840 --rc geninfo_all_blocks=1 00:45:12.840 --rc geninfo_unexecuted_blocks=1 00:45:12.840 00:45:12.840 ' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.840 --rc genhtml_branch_coverage=1 00:45:12.840 --rc genhtml_function_coverage=1 00:45:12.840 --rc genhtml_legend=1 00:45:12.840 --rc geninfo_all_blocks=1 00:45:12.840 --rc geninfo_unexecuted_blocks=1 00:45:12.840 00:45:12.840 ' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78352 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78352 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78352 ']' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:12.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:12.840 14:06:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:45:12.840 [2024-11-20 14:06:09.766150] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
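Condensed from the xtrace above: ftl/common.sh only exports paths and defaults, and bdevperf.sh then launches the bdevperf app in wait-for-RPC mode before any FTL work starts. A minimal sketch of that bring-up (pid bookkeeping simplified; waitforlisten and killprocess are the autotest_common.sh helpers visible in the trace, polling /var/tmp/spdk.sock with up to 100 retries):

  # start bdevperf with no workload yet (-z) and name the bdev under test (-T ftl0)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # make sure the app is torn down on any exit path
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  # block until the app is listening on the UNIX domain socket /var/tmp/spdk.sock
  waitforlisten "$bdevperf_pid"

Every subsequent step in this log is an rpc.py or bdevperf.py call against that one long-lived process.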
00:45:12.840 [2024-11-20 14:06:09.766331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78352 ] 00:45:12.840 [2024-11-20 14:06:09.971419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:12.840 [2024-11-20 14:06:10.148233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:45:13.775 14:06:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:45:14.033 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:45:14.291 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:14.291 { 00:45:14.291 "name": "nvme0n1", 00:45:14.291 "aliases": [ 00:45:14.291 "6b5c417a-c08a-4a6c-9db2-3796a3269115" 00:45:14.291 ], 00:45:14.291 "product_name": "NVMe disk", 00:45:14.291 "block_size": 4096, 00:45:14.291 "num_blocks": 1310720, 00:45:14.291 "uuid": "6b5c417a-c08a-4a6c-9db2-3796a3269115", 00:45:14.291 "numa_id": -1, 00:45:14.291 "assigned_rate_limits": { 00:45:14.291 "rw_ios_per_sec": 0, 00:45:14.291 "rw_mbytes_per_sec": 0, 00:45:14.291 "r_mbytes_per_sec": 0, 00:45:14.291 "w_mbytes_per_sec": 0 00:45:14.291 }, 00:45:14.291 "claimed": true, 00:45:14.291 "claim_type": "read_many_write_one", 00:45:14.291 "zoned": false, 00:45:14.291 "supported_io_types": { 00:45:14.291 "read": true, 00:45:14.291 "write": true, 00:45:14.291 "unmap": true, 00:45:14.291 "flush": true, 00:45:14.291 "reset": true, 00:45:14.291 "nvme_admin": true, 00:45:14.291 "nvme_io": true, 00:45:14.291 "nvme_io_md": false, 00:45:14.291 "write_zeroes": true, 00:45:14.291 "zcopy": false, 00:45:14.291 "get_zone_info": false, 00:45:14.291 "zone_management": false, 00:45:14.291 "zone_append": false, 00:45:14.291 "compare": true, 00:45:14.291 "compare_and_write": false, 00:45:14.291 "abort": true, 00:45:14.291 "seek_hole": false, 00:45:14.291 "seek_data": false, 00:45:14.291 "copy": true, 00:45:14.291 "nvme_iov_md": false 00:45:14.291 }, 00:45:14.291 "driver_specific": { 00:45:14.291 
"nvme": [ 00:45:14.291 { 00:45:14.291 "pci_address": "0000:00:11.0", 00:45:14.291 "trid": { 00:45:14.291 "trtype": "PCIe", 00:45:14.291 "traddr": "0000:00:11.0" 00:45:14.291 }, 00:45:14.292 "ctrlr_data": { 00:45:14.292 "cntlid": 0, 00:45:14.292 "vendor_id": "0x1b36", 00:45:14.292 "model_number": "QEMU NVMe Ctrl", 00:45:14.292 "serial_number": "12341", 00:45:14.292 "firmware_revision": "8.0.0", 00:45:14.292 "subnqn": "nqn.2019-08.org.qemu:12341", 00:45:14.292 "oacs": { 00:45:14.292 "security": 0, 00:45:14.292 "format": 1, 00:45:14.292 "firmware": 0, 00:45:14.292 "ns_manage": 1 00:45:14.292 }, 00:45:14.292 "multi_ctrlr": false, 00:45:14.292 "ana_reporting": false 00:45:14.292 }, 00:45:14.292 "vs": { 00:45:14.292 "nvme_version": "1.4" 00:45:14.292 }, 00:45:14.292 "ns_data": { 00:45:14.292 "id": 1, 00:45:14.292 "can_share": false 00:45:14.292 } 00:45:14.292 } 00:45:14.292 ], 00:45:14.292 "mp_policy": "active_passive" 00:45:14.292 } 00:45:14.292 } 00:45:14.292 ]' 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:14.292 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:14.550 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=20c7a283-05ff-4452-b2ba-31c9e99d047b 00:45:14.550 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:45:14.550 14:06:11 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20c7a283-05ff-4452-b2ba-31c9e99d047b 00:45:14.808 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:45:15.066 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f1d96004-2d93-4226-b5bb-419af22fc71f 00:45:15.066 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1d96004-2d93-4226-b5bb-419af22fc71f 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.324 14:06:12 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:45:15.324 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:15.911 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:15.911 { 00:45:15.911 "name": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:15.911 "aliases": [ 00:45:15.911 "lvs/nvme0n1p0" 00:45:15.911 ], 00:45:15.911 "product_name": "Logical Volume", 00:45:15.911 "block_size": 4096, 00:45:15.911 "num_blocks": 26476544, 00:45:15.911 "uuid": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:15.911 "assigned_rate_limits": { 00:45:15.911 "rw_ios_per_sec": 0, 00:45:15.911 "rw_mbytes_per_sec": 0, 00:45:15.911 "r_mbytes_per_sec": 0, 00:45:15.911 "w_mbytes_per_sec": 0 00:45:15.911 }, 00:45:15.911 "claimed": false, 00:45:15.911 "zoned": false, 00:45:15.911 "supported_io_types": { 00:45:15.911 "read": true, 00:45:15.911 "write": true, 00:45:15.911 "unmap": true, 00:45:15.911 "flush": false, 00:45:15.911 "reset": true, 00:45:15.911 "nvme_admin": false, 00:45:15.911 "nvme_io": false, 00:45:15.911 "nvme_io_md": false, 00:45:15.912 "write_zeroes": true, 00:45:15.912 "zcopy": false, 00:45:15.912 "get_zone_info": false, 00:45:15.912 "zone_management": false, 00:45:15.912 "zone_append": false, 00:45:15.912 "compare": false, 00:45:15.912 "compare_and_write": false, 00:45:15.912 "abort": false, 00:45:15.912 "seek_hole": true, 00:45:15.912 "seek_data": true, 00:45:15.912 "copy": false, 00:45:15.912 "nvme_iov_md": false 00:45:15.912 }, 00:45:15.912 "driver_specific": { 00:45:15.912 "lvol": { 00:45:15.912 "lvol_store_uuid": "f1d96004-2d93-4226-b5bb-419af22fc71f", 00:45:15.912 "base_bdev": "nvme0n1", 00:45:15.912 "thin_provision": true, 00:45:15.912 "num_allocated_clusters": 0, 00:45:15.912 "snapshot": false, 00:45:15.912 "clone": false, 00:45:15.912 "esnap_clone": false 00:45:15.912 } 00:45:15.912 } 00:45:15.912 } 00:45:15.912 ]' 00:45:15.912 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:15.912 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:45:15.912 14:06:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:45:15.912 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:45:16.174 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.432 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:16.432 { 00:45:16.432 "name": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:16.432 "aliases": [ 00:45:16.432 "lvs/nvme0n1p0" 00:45:16.432 ], 00:45:16.432 "product_name": "Logical Volume", 00:45:16.432 "block_size": 4096, 00:45:16.432 "num_blocks": 26476544, 00:45:16.432 "uuid": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:16.432 "assigned_rate_limits": { 00:45:16.432 "rw_ios_per_sec": 0, 00:45:16.432 "rw_mbytes_per_sec": 0, 00:45:16.432 "r_mbytes_per_sec": 0, 00:45:16.432 "w_mbytes_per_sec": 0 00:45:16.432 }, 00:45:16.432 "claimed": false, 00:45:16.432 "zoned": false, 00:45:16.432 "supported_io_types": { 00:45:16.432 "read": true, 00:45:16.432 "write": true, 00:45:16.432 "unmap": true, 00:45:16.432 "flush": false, 00:45:16.432 "reset": true, 00:45:16.432 "nvme_admin": false, 00:45:16.432 "nvme_io": false, 00:45:16.432 "nvme_io_md": false, 00:45:16.432 "write_zeroes": true, 00:45:16.432 "zcopy": false, 00:45:16.432 "get_zone_info": false, 00:45:16.432 "zone_management": false, 00:45:16.432 "zone_append": false, 00:45:16.432 "compare": false, 00:45:16.432 "compare_and_write": false, 00:45:16.432 "abort": false, 00:45:16.432 "seek_hole": true, 00:45:16.432 "seek_data": true, 00:45:16.432 "copy": false, 00:45:16.432 "nvme_iov_md": false 00:45:16.432 }, 00:45:16.432 "driver_specific": { 00:45:16.432 "lvol": { 00:45:16.432 "lvol_store_uuid": "f1d96004-2d93-4226-b5bb-419af22fc71f", 00:45:16.432 "base_bdev": "nvme0n1", 00:45:16.432 "thin_provision": true, 00:45:16.432 "num_allocated_clusters": 0, 00:45:16.432 "snapshot": false, 00:45:16.432 "clone": false, 00:45:16.432 "esnap_clone": false 00:45:16.432 } 00:45:16.432 } 00:45:16.432 } 00:45:16.432 ]' 00:45:16.432 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:16.432 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:45:16.432 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:16.433 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:16.433 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:16.433 14:06:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:45:16.433 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:45:16.433 14:06:13 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9df09a-b921-41fc-a246-b7e38e76c8f8 00:45:16.999 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:16.999 { 00:45:16.999 "name": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:16.999 "aliases": [ 00:45:16.999 "lvs/nvme0n1p0" 00:45:16.999 ], 00:45:16.999 "product_name": "Logical Volume", 00:45:16.999 "block_size": 4096, 00:45:16.999 "num_blocks": 26476544, 00:45:16.999 "uuid": "fc9df09a-b921-41fc-a246-b7e38e76c8f8", 00:45:16.999 "assigned_rate_limits": { 00:45:16.999 "rw_ios_per_sec": 0, 00:45:16.999 "rw_mbytes_per_sec": 0, 00:45:16.999 "r_mbytes_per_sec": 0, 00:45:16.999 "w_mbytes_per_sec": 0 00:45:16.999 }, 00:45:16.999 "claimed": false, 00:45:16.999 "zoned": false, 00:45:16.999 "supported_io_types": { 00:45:16.999 "read": true, 00:45:16.999 "write": true, 00:45:16.999 "unmap": true, 00:45:16.999 "flush": false, 00:45:16.999 "reset": true, 00:45:16.999 "nvme_admin": false, 00:45:16.999 "nvme_io": false, 00:45:16.999 "nvme_io_md": false, 00:45:16.999 "write_zeroes": true, 00:45:16.999 "zcopy": false, 00:45:16.999 "get_zone_info": false, 00:45:16.999 "zone_management": false, 00:45:16.999 "zone_append": false, 00:45:16.999 "compare": false, 00:45:16.999 "compare_and_write": false, 00:45:16.999 "abort": false, 00:45:16.999 "seek_hole": true, 00:45:16.999 "seek_data": true, 00:45:16.999 "copy": false, 00:45:16.999 "nvme_iov_md": false 00:45:16.999 }, 00:45:16.999 "driver_specific": { 00:45:16.999 "lvol": { 00:45:16.999 "lvol_store_uuid": "f1d96004-2d93-4226-b5bb-419af22fc71f", 00:45:16.999 "base_bdev": "nvme0n1", 00:45:16.999 "thin_provision": true, 00:45:17.000 "num_allocated_clusters": 0, 00:45:17.000 "snapshot": false, 00:45:17.000 "clone": false, 00:45:17.000 "esnap_clone": false 00:45:17.000 } 00:45:17.000 } 00:45:17.000 } 00:45:17.000 ]' 00:45:17.000 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:45:17.258 14:06:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fc9df09a-b921-41fc-a246-b7e38e76c8f8 -c nvc0n1p0 --l2p_dram_limit 20 00:45:17.518 [2024-11-20 14:06:14.653749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.653814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:17.518 [2024-11-20 14:06:14.653831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:17.518 [2024-11-20 14:06:14.653845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.653912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.653931] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:17.518 [2024-11-20 14:06:14.653942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:45:17.518 [2024-11-20 14:06:14.653955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.653975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:17.518 [2024-11-20 14:06:14.655101] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:17.518 [2024-11-20 14:06:14.655125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.655140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:17.518 [2024-11-20 14:06:14.655152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:45:17.518 [2024-11-20 14:06:14.655166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.655252] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4fe9913f-2a9c-4f4e-8333-7be23b4ec047 00:45:17.518 [2024-11-20 14:06:14.656807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.656838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:45:17.518 [2024-11-20 14:06:14.656854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:45:17.518 [2024-11-20 14:06:14.656870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.664515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.664547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:17.518 [2024-11-20 14:06:14.664580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.596 ms 00:45:17.518 [2024-11-20 14:06:14.664592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.664718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.664734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:17.518 [2024-11-20 14:06:14.664753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:45:17.518 [2024-11-20 14:06:14.664763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.664833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.664845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:17.518 [2024-11-20 14:06:14.664859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:17.518 [2024-11-20 14:06:14.664869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.664897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:17.518 [2024-11-20 14:06:14.670346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.670382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:17.518 [2024-11-20 14:06:14.670395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.459 ms 00:45:17.518 [2024-11-20 14:06:14.670412] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.670445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.670459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:17.518 [2024-11-20 14:06:14.670470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:17.518 [2024-11-20 14:06:14.670491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.670539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:45:17.518 [2024-11-20 14:06:14.670678] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:17.518 [2024-11-20 14:06:14.670692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:17.518 [2024-11-20 14:06:14.670709] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:17.518 [2024-11-20 14:06:14.670722] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:17.518 [2024-11-20 14:06:14.670737] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:17.518 [2024-11-20 14:06:14.670749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:17.518 [2024-11-20 14:06:14.670762] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:17.518 [2024-11-20 14:06:14.670772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:17.518 [2024-11-20 14:06:14.670785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:17.518 [2024-11-20 14:06:14.670795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.670811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:17.518 [2024-11-20 14:06:14.670838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:45:17.518 [2024-11-20 14:06:14.670852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.670940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.518 [2024-11-20 14:06:14.670955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:17.518 [2024-11-20 14:06:14.670966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:45:17.518 [2024-11-20 14:06:14.670980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.518 [2024-11-20 14:06:14.671062] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:17.518 [2024-11-20 14:06:14.671076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:17.518 [2024-11-20 14:06:14.671089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:17.518 [2024-11-20 14:06:14.671102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.518 [2024-11-20 14:06:14.671112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:17.518 [2024-11-20 14:06:14.671124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:17.518 [2024-11-20 14:06:14.671134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:17.518 
[2024-11-20 14:06:14.671148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:17.518 [2024-11-20 14:06:14.671157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:17.518 [2024-11-20 14:06:14.671169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:17.518 [2024-11-20 14:06:14.671179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:17.518 [2024-11-20 14:06:14.671191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:17.518 [2024-11-20 14:06:14.671200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:17.519 [2024-11-20 14:06:14.671224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:17.519 [2024-11-20 14:06:14.671234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:17.519 [2024-11-20 14:06:14.671249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:17.519 [2024-11-20 14:06:14.671270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:17.519 [2024-11-20 14:06:14.671303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:17.519 [2024-11-20 14:06:14.671335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:17.519 [2024-11-20 14:06:14.671366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:17.519 [2024-11-20 14:06:14.671398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:17.519 [2024-11-20 14:06:14.671431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:17.519 [2024-11-20 14:06:14.671451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:17.519 [2024-11-20 14:06:14.671463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:17.519 [2024-11-20 14:06:14.671472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:17.519 [2024-11-20 14:06:14.671484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:17.519 [2024-11-20 14:06:14.671493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:45:17.519 [2024-11-20 14:06:14.671530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:17.519 [2024-11-20 14:06:14.671553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:17.519 [2024-11-20 14:06:14.671562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671574] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:17.519 [2024-11-20 14:06:14.671584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:17.519 [2024-11-20 14:06:14.671596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:17.519 [2024-11-20 14:06:14.671624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:17.519 [2024-11-20 14:06:14.671633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:17.519 [2024-11-20 14:06:14.671645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:17.519 [2024-11-20 14:06:14.671654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:17.519 [2024-11-20 14:06:14.671666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:17.519 [2024-11-20 14:06:14.671675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:17.519 [2024-11-20 14:06:14.671692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:17.519 [2024-11-20 14:06:14.671705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:17.519 [2024-11-20 14:06:14.671731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:17.519 [2024-11-20 14:06:14.671743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:17.519 [2024-11-20 14:06:14.671754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:17.519 [2024-11-20 14:06:14.671767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:17.519 [2024-11-20 14:06:14.671778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:17.519 [2024-11-20 14:06:14.671800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:17.519 [2024-11-20 14:06:14.671811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:17.519 [2024-11-20 14:06:14.671844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:17.519 [2024-11-20 14:06:14.671855] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:17.519 [2024-11-20 14:06:14.671920] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:17.519 [2024-11-20 14:06:14.671933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:17.519 [2024-11-20 14:06:14.671963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:17.519 [2024-11-20 14:06:14.671977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:17.519 [2024-11-20 14:06:14.671988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:17.519 [2024-11-20 14:06:14.672003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:17.519 [2024-11-20 14:06:14.672018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:17.519 [2024-11-20 14:06:14.672031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:45:17.519 [2024-11-20 14:06:14.672043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:17.519 [2024-11-20 14:06:14.672088] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:45:17.519 [2024-11-20 14:06:14.672102] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:45:20.808 [2024-11-20 14:06:18.092967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.808 [2024-11-20 14:06:18.093035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:45:20.808 [2024-11-20 14:06:18.093063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3420.851 ms 00:45:20.808 [2024-11-20 14:06:18.093076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.137281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.137338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:21.067 [2024-11-20 14:06:18.137357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.835 ms 00:45:21.067 [2024-11-20 14:06:18.137369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.137545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.137560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:21.067 [2024-11-20 14:06:18.137578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:45:21.067 [2024-11-20 14:06:18.137589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.196843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.196897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:21.067 [2024-11-20 14:06:18.196918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.183 ms 00:45:21.067 [2024-11-20 14:06:18.196929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.196981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.196997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:21.067 [2024-11-20 14:06:18.197011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:21.067 [2024-11-20 14:06:18.197021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.197572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.197591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:21.067 [2024-11-20 14:06:18.197606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:45:21.067 [2024-11-20 14:06:18.197616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.197732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.197748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:21.067 [2024-11-20 14:06:18.197764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:45:21.067 [2024-11-20 14:06:18.197774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.218734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.218784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:21.067 [2024-11-20 
14:06:18.218805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.936 ms 00:45:21.067 [2024-11-20 14:06:18.218817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.232743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:45:21.067 [2024-11-20 14:06:18.238703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.238752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:21.067 [2024-11-20 14:06:18.238768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.762 ms 00:45:21.067 [2024-11-20 14:06:18.238781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.332841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.332923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:21.067 [2024-11-20 14:06:18.332943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.009 ms 00:45:21.067 [2024-11-20 14:06:18.332957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.333171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.333193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:21.067 [2024-11-20 14:06:18.333206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:45:21.067 [2024-11-20 14:06:18.333219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.067 [2024-11-20 14:06:18.374259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.067 [2024-11-20 14:06:18.374576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:21.067 [2024-11-20 14:06:18.374605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.956 ms 00:45:21.067 [2024-11-20 14:06:18.374619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.326 [2024-11-20 14:06:18.416537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.326 [2024-11-20 14:06:18.416832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:21.326 [2024-11-20 14:06:18.416860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.818 ms 00:45:21.326 [2024-11-20 14:06:18.416874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.326 [2024-11-20 14:06:18.417692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.326 [2024-11-20 14:06:18.417719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:21.326 [2024-11-20 14:06:18.417733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:45:21.326 [2024-11-20 14:06:18.417746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.326 [2024-11-20 14:06:18.531747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.326 [2024-11-20 14:06:18.532047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:21.326 [2024-11-20 14:06:18.532076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.913 ms 00:45:21.326 [2024-11-20 14:06:18.532092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.326 [2024-11-20 
14:06:18.573317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.326 [2024-11-20 14:06:18.573399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:21.326 [2024-11-20 14:06:18.573421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.068 ms 00:45:21.326 [2024-11-20 14:06:18.573435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.326 [2024-11-20 14:06:18.613658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.326 [2024-11-20 14:06:18.613727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:45:21.326 [2024-11-20 14:06:18.613745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.126 ms 00:45:21.326 [2024-11-20 14:06:18.613758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.586 [2024-11-20 14:06:18.655911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.586 [2024-11-20 14:06:18.655983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:21.586 [2024-11-20 14:06:18.656001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.086 ms 00:45:21.586 [2024-11-20 14:06:18.656016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.586 [2024-11-20 14:06:18.656105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.586 [2024-11-20 14:06:18.656125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:21.586 [2024-11-20 14:06:18.656137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:21.586 [2024-11-20 14:06:18.656150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.586 [2024-11-20 14:06:18.656281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.586 [2024-11-20 14:06:18.656299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:21.586 [2024-11-20 14:06:18.656311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:45:21.586 [2024-11-20 14:06:18.656325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.586 [2024-11-20 14:06:18.657737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4003.395 ms, result 0 00:45:21.586 { 00:45:21.586 "name": "ftl0", 00:45:21.586 "uuid": "4fe9913f-2a9c-4f4e-8333-7be23b4ec047" 00:45:21.586 } 00:45:21.586 14:06:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:45:21.586 14:06:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:45:21.586 14:06:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:45:21.844 14:06:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:45:21.844 [2024-11-20 14:06:19.118014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:45:21.845 I/O size of 69632 is greater than zero copy threshold (65536). 00:45:21.845 Zero copy mechanism will not be used. 00:45:21.845 Running I/O for 4 seconds... 
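With the first pass now in flight, the setup trace between 14:06:10 and 14:06:18 above reduces to this rpc.py sequence (commands as they appear in the trace; UUIDs abbreviated to their first group):

  # base device: 5 GiB QEMU namespace at 0000:00:11.0 (1310720 x 4096 B blocks)
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # wipe the leftover lvstore, then carve a thin-provisioned 103424 MiB (101 GiB) lvol on top
  rpc.py bdev_lvol_delete_lvstore -u 20c7a283-...
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1d96004-...
  # cache device at 0000:00:10.0, split down to a 5171 MiB write-buffer partition
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1
  # glue them together as ftl0, capping the resident L2P at 20 MiB of DRAM
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d fc9df09a-... -c nvc0n1p0 --l2p_dram_limit 20

The startup trace confirms the sizing: 20971520 L2P entries at 4 B each is exactly the 80 MiB l2p region, of which at most 19 of the 20 MiB limit stays resident; and of the 4003.395 ms total startup, 3420.851 ms went to scrubbing the 5 NV-cache chunks.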
00:45:24.160 1580.00 IOPS, 104.92 MiB/s [2024-11-20T14:06:22.420Z] 1704.00 IOPS, 113.16 MiB/s [2024-11-20T14:06:23.355Z] 1789.00 IOPS, 118.80 MiB/s [2024-11-20T14:06:23.355Z] 1834.00 IOPS, 121.79 MiB/s 00:45:26.032 Latency(us) 00:45:26.032 [2024-11-20T14:06:23.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:26.032 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:45:26.032 ftl0 : 4.00 1833.24 121.74 0.00 0.00 570.47 226.26 2309.36 00:45:26.032 [2024-11-20T14:06:23.355Z] =================================================================================================================== 00:45:26.032 [2024-11-20T14:06:23.355Z] Total : 1833.24 121.74 0.00 0.00 570.47 226.26 2309.36 00:45:26.032 [2024-11-20 14:06:23.130303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:45:26.032 { 00:45:26.032 "results": [ 00:45:26.032 { 00:45:26.032 "job": "ftl0", 00:45:26.032 "core_mask": "0x1", 00:45:26.032 "workload": "randwrite", 00:45:26.032 "status": "finished", 00:45:26.032 "queue_depth": 1, 00:45:26.032 "io_size": 69632, 00:45:26.032 "runtime": 4.002203, 00:45:26.032 "iops": 1833.2403428811583, 00:45:26.032 "mibps": 121.73861651945192, 00:45:26.032 "io_failed": 0, 00:45:26.032 "io_timeout": 0, 00:45:26.032 "avg_latency_us": 570.4746089293017, 00:45:26.032 "min_latency_us": 226.2552380952381, 00:45:26.032 "max_latency_us": 2309.3638095238093 00:45:26.032 } 00:45:26.032 ], 00:45:26.032 "core_count": 1 00:45:26.032 } 00:45:26.032 14:06:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:45:26.032 [2024-11-20 14:06:23.301641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:45:26.032 Running I/O for 4 seconds... 
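Sanity-checking the pass-1 numbers just printed: the MiB/s column is simply IOPS x io_size, i.e. 1833.24 x 69632 B / 2^20 ≈ 121.74 MiB/s, matching both the table and the "mibps" field in the results JSON. The odd-looking 69632 B transfer size is 17 x 4096 B blocks (68 KiB), and because it exceeds the 65536 B threshold, bdevperf warned above that zero copy is disabled for this pass. The second pass now starting drops to plain 4096 B random writes but raises the queue depth from 1 to 128.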
00:45:28.349 10289.00 IOPS, 40.19 MiB/s [2024-11-20T14:06:26.608Z] 9890.50 IOPS, 38.63 MiB/s [2024-11-20T14:06:27.544Z] 9591.00 IOPS, 37.46 MiB/s [2024-11-20T14:06:27.544Z] 9670.25 IOPS, 37.77 MiB/s 00:45:30.221 Latency(us) 00:45:30.221 [2024-11-20T14:06:27.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:30.221 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:45:30.222 ftl0 : 4.02 9661.32 37.74 0.00 0.00 13219.14 269.17 31082.79 00:45:30.222 [2024-11-20T14:06:27.545Z] =================================================================================================================== 00:45:30.222 [2024-11-20T14:06:27.545Z] Total : 9661.32 37.74 0.00 0.00 13219.14 0.00 31082.79 00:45:30.222 [2024-11-20 14:06:27.330337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:45:30.222 { 00:45:30.222 "results": [ 00:45:30.222 { 00:45:30.222 "job": "ftl0", 00:45:30.222 "core_mask": "0x1", 00:45:30.222 "workload": "randwrite", 00:45:30.222 "status": "finished", 00:45:30.222 "queue_depth": 128, 00:45:30.222 "io_size": 4096, 00:45:30.222 "runtime": 4.016945, 00:45:30.222 "iops": 9661.322223729725, 00:45:30.222 "mibps": 37.73953993644424, 00:45:30.222 "io_failed": 0, 00:45:30.222 "io_timeout": 0, 00:45:30.222 "avg_latency_us": 13219.140466632065, 00:45:30.222 "min_latency_us": 269.1657142857143, 00:45:30.222 "max_latency_us": 31082.788571428573 00:45:30.222 } 00:45:30.222 ], 00:45:30.222 "core_count": 1 00:45:30.222 } 00:45:30.222 14:06:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:45:30.222 [2024-11-20 14:06:27.478526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:45:30.222 Running I/O for 4 seconds... 
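The same identity holds for pass 2: 9661.32 x 4096 B / 2^20 ≈ 37.74 MiB/s. The two randwrite passes also agree with Little's law (queue depth ≈ IOPS x average latency): 1833.24/s x 570.47 µs ≈ 1.05 outstanding I/Os for the depth-1 run, and 9661.32/s x 13219.14 µs ≈ 127.7 for the depth-128 run, so the deeper queue bought about 5.3x the IOPS at about 23x the average latency. The final pass below switches the workload to verify, exercising write-then-read-back checking at the same depth 128 over the 0x1400000-block (20971520-block) LBA range shown in its results.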
00:45:32.168 7679.00 IOPS, 30.00 MiB/s [2024-11-20T14:06:30.867Z] 7699.00 IOPS, 30.07 MiB/s [2024-11-20T14:06:31.892Z] 7752.00 IOPS, 30.28 MiB/s [2024-11-20T14:06:31.892Z] 7741.50 IOPS, 30.24 MiB/s 00:45:34.569 Latency(us) 00:45:34.569 [2024-11-20T14:06:31.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:34.569 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:34.569 Verification LBA range: start 0x0 length 0x1400000 00:45:34.569 ftl0 : 4.01 7750.83 30.28 0.00 0.00 16460.29 304.27 18974.23 00:45:34.569 [2024-11-20T14:06:31.892Z] =================================================================================================================== 00:45:34.569 [2024-11-20T14:06:31.892Z] Total : 7750.83 30.28 0.00 0.00 16460.29 0.00 18974.23 00:45:34.569 [2024-11-20 14:06:31.511280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:45:34.569 { 00:45:34.569 "results": [ 00:45:34.569 { 00:45:34.569 "job": "ftl0", 00:45:34.569 "core_mask": "0x1", 00:45:34.569 "workload": "verify", 00:45:34.569 "status": "finished", 00:45:34.569 "verify_range": { 00:45:34.569 "start": 0, 00:45:34.569 "length": 20971520 00:45:34.569 }, 00:45:34.569 "queue_depth": 128, 00:45:34.569 "io_size": 4096, 00:45:34.569 "runtime": 4.011569, 00:45:34.569 "iops": 7750.832654255729, 00:45:34.569 "mibps": 30.27669005568644, 00:45:34.569 "io_failed": 0, 00:45:34.569 "io_timeout": 0, 00:45:34.569 "avg_latency_us": 16460.28574405815, 00:45:34.569 "min_latency_us": 304.2742857142857, 00:45:34.569 "max_latency_us": 18974.23238095238 00:45:34.569 } 00:45:34.569 ], 00:45:34.569 "core_count": 1 00:45:34.569 } 00:45:34.569 14:06:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:45:34.569 [2024-11-20 14:06:31.821188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.569 [2024-11-20 14:06:31.821268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:34.569 [2024-11-20 14:06:31.821286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:34.569 [2024-11-20 14:06:31.821300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.570 [2024-11-20 14:06:31.821326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:34.570 [2024-11-20 14:06:31.825775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.570 [2024-11-20 14:06:31.825811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:34.570 [2024-11-20 14:06:31.825828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:45:34.570 [2024-11-20 14:06:31.825839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.570 [2024-11-20 14:06:31.827772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.570 [2024-11-20 14:06:31.827827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:34.570 [2024-11-20 14:06:31.827846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:45:34.570 [2024-11-20 14:06:31.827865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.012413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.012509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:45:34.829 [2024-11-20 14:06:32.012538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.506 ms 00:45:34.829 [2024-11-20 14:06:32.012570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.017934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.017980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:34.829 [2024-11-20 14:06:32.017998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms 00:45:34.829 [2024-11-20 14:06:32.018009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.058094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.058161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:34.829 [2024-11-20 14:06:32.058183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.990 ms 00:45:34.829 [2024-11-20 14:06:32.058193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.083234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.083304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:34.829 [2024-11-20 14:06:32.083325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.968 ms 00:45:34.829 [2024-11-20 14:06:32.083337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.083572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.083588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:34.829 [2024-11-20 14:06:32.083607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:45:34.829 [2024-11-20 14:06:32.083635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.829 [2024-11-20 14:06:32.123443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.829 [2024-11-20 14:06:32.123510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:34.829 [2024-11-20 14:06:32.123531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.779 ms 00:45:34.829 [2024-11-20 14:06:32.123541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.089 [2024-11-20 14:06:32.162601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.089 [2024-11-20 14:06:32.162875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:35.089 [2024-11-20 14:06:32.162907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.995 ms 00:45:35.089 [2024-11-20 14:06:32.162918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.089 [2024-11-20 14:06:32.202274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.089 [2024-11-20 14:06:32.202341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:35.089 [2024-11-20 14:06:32.202362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.292 ms 00:45:35.089 [2024-11-20 14:06:32.202373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.089 [2024-11-20 14:06:32.242379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.089 [2024-11-20 14:06:32.242434] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:35.089 [2024-11-20 14:06:32.242458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.834 ms 00:45:35.089 [2024-11-20 14:06:32.242469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.089 [2024-11-20 14:06:32.242559] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:35.089 [2024-11-20 14:06:32.242603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:35.089 [2024-11-20 14:06:32.242764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:45:35.090 [2024-11-20 14:06:32.242893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.242998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:35.090 [2024-11-20 14:06:32.243775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243944] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.243985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:35.091 [2024-11-20 14:06:32.244006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:35.091 [2024-11-20 14:06:32.244020] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fe9913f-2a9c-4f4e-8333-7be23b4ec047 00:45:35.091 [2024-11-20 14:06:32.244032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:35.091 [2024-11-20 14:06:32.244050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:35.091 [2024-11-20 14:06:32.244067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:35.091 [2024-11-20 14:06:32.244082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:35.091 [2024-11-20 14:06:32.244092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:35.091 [2024-11-20 14:06:32.244106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:35.091 [2024-11-20 14:06:32.244117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:35.091 [2024-11-20 14:06:32.244132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:35.091 [2024-11-20 14:06:32.244142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:35.091 [2024-11-20 14:06:32.244156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.091 [2024-11-20 14:06:32.244168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:35.091 [2024-11-20 14:06:32.244188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:45:35.091 [2024-11-20 14:06:32.244199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.265438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.091 [2024-11-20 14:06:32.265512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:35.091 [2024-11-20 14:06:32.265532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.150 ms 00:45:35.091 [2024-11-20 14:06:32.265543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.266123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:35.091 [2024-11-20 14:06:32.266142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:35.091 [2024-11-20 14:06:32.266156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:45:35.091 [2024-11-20 14:06:32.266166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.323604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.091 [2024-11-20 14:06:32.323843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:35.091 [2024-11-20 14:06:32.323877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.091 [2024-11-20 14:06:32.323888] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.323969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.091 [2024-11-20 14:06:32.323981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:35.091 [2024-11-20 14:06:32.323994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.091 [2024-11-20 14:06:32.324004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.324157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.091 [2024-11-20 14:06:32.324171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:35.091 [2024-11-20 14:06:32.324185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.091 [2024-11-20 14:06:32.324195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.091 [2024-11-20 14:06:32.324215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.091 [2024-11-20 14:06:32.324226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:35.091 [2024-11-20 14:06:32.324240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.091 [2024-11-20 14:06:32.324250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.357 [2024-11-20 14:06:32.456442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.357 [2024-11-20 14:06:32.456725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:35.357 [2024-11-20 14:06:32.456760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.456772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.565281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.565352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:35.358 [2024-11-20 14:06:32.565372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.565383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.565563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.565582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:35.358 [2024-11-20 14:06:32.565598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.565609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.565677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.565691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:35.358 [2024-11-20 14:06:32.565706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.565717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.565857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.565872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:35.358 [2024-11-20 14:06:32.565894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:45:35.358 [2024-11-20 14:06:32.565905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.565948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.565968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:35.358 [2024-11-20 14:06:32.565983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.565994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.566038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.566049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:35.358 [2024-11-20 14:06:32.566067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.566078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.566139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:35.358 [2024-11-20 14:06:32.566164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:35.358 [2024-11-20 14:06:32.566180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:35.358 [2024-11-20 14:06:32.566190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:35.358 [2024-11-20 14:06:32.566329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 745.089 ms, result 0 00:45:35.358 true 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78352 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78352 ']' 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78352 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78352 00:45:35.358 killing process with pid 78352 00:45:35.358 Received shutdown signal, test time was about 4.000000 seconds 00:45:35.358 00:45:35.358 Latency(us) 00:45:35.358 [2024-11-20T14:06:32.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.358 [2024-11-20T14:06:32.681Z] =================================================================================================================== 00:45:35.358 [2024-11-20T14:06:32.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78352' 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78352 00:45:35.358 14:06:32 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78352 00:45:40.632 Remove shared memory files 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:45:40.632 14:06:37 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:45:40.632 ************************************ 00:45:40.632 END TEST ftl_bdevperf 00:45:40.632 ************************************ 00:45:40.632 00:45:40.632 real 0m27.764s 00:45:40.632 user 0m31.332s 00:45:40.632 sys 0m1.540s 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:40.632 14:06:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:45:40.632 14:06:37 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:45:40.632 14:06:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:40.632 14:06:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:40.632 14:06:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:45:40.632 ************************************ 00:45:40.632 START TEST ftl_trim 00:45:40.632 ************************************ 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:45:40.632 * Looking for test storage... 00:45:40.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:40.632 14:06:37 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:40.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.632 --rc genhtml_branch_coverage=1 00:45:40.632 --rc genhtml_function_coverage=1 00:45:40.632 --rc genhtml_legend=1 00:45:40.632 --rc geninfo_all_blocks=1 00:45:40.632 --rc geninfo_unexecuted_blocks=1 00:45:40.632 00:45:40.632 ' 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:40.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.632 --rc genhtml_branch_coverage=1 00:45:40.632 --rc genhtml_function_coverage=1 00:45:40.632 --rc genhtml_legend=1 00:45:40.632 --rc geninfo_all_blocks=1 00:45:40.632 --rc geninfo_unexecuted_blocks=1 00:45:40.632 00:45:40.632 ' 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:40.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.632 --rc genhtml_branch_coverage=1 00:45:40.632 --rc genhtml_function_coverage=1 00:45:40.632 --rc genhtml_legend=1 00:45:40.632 --rc geninfo_all_blocks=1 00:45:40.632 --rc geninfo_unexecuted_blocks=1 00:45:40.632 00:45:40.632 ' 00:45:40.632 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:40.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.632 --rc genhtml_branch_coverage=1 00:45:40.632 --rc genhtml_function_coverage=1 00:45:40.632 --rc genhtml_legend=1 00:45:40.632 --rc geninfo_all_blocks=1 00:45:40.632 --rc geninfo_unexecuted_blocks=1 00:45:40.632 00:45:40.632 ' 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
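The ftl/common.sh trace records above show how the test script locates the repository before issuing any RPCs: testdir is the resolved directory of the script itself, rootdir is two levels up, and rpc_py points at scripts/rpc.py beneath it. A minimal standalone sketch of the same idiom, assuming GNU readlink -f as used in the trace (the example paths mirror the values visible in the records above):

    # Resolve the directory containing this script, then the repo root two levels up
    testdir=$(readlink -f "$(dirname "$0")")    # e.g. /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")     # e.g. /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py              # RPC client used by the rest of the test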
00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:40.632 14:06:37 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:40.633 14:06:37 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78727 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78727 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78727 ']' 00:45:40.633 14:06:37 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:40.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:40.633 14:06:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:45:40.633 [2024-11-20 14:06:37.615976] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:45:40.633 [2024-11-20 14:06:37.616191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78727 ] 00:45:40.633 [2024-11-20 14:06:37.821001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:40.891 [2024-11-20 14:06:37.987385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:40.891 [2024-11-20 14:06:37.987554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.891 [2024-11-20 14:06:37.987595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:41.825 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:41.825 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:45:41.825 14:06:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:45:41.825 14:06:39 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:45:41.825 14:06:39 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:45:41.826 14:06:39 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:45:41.826 14:06:39 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:45:41.826 14:06:39 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:42.391 14:06:39 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:45:42.391 14:06:39 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:45:42.391 14:06:39 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:45:42.391 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:45:42.391 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:42.392 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:45:42.392 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:45:42.392 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:45:42.392 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:42.392 { 00:45:42.392 "name": "nvme0n1", 00:45:42.392 "aliases": [ 
00:45:42.392 "d2628404-42f1-4c3d-8733-d85e8a37f387" 00:45:42.392 ], 00:45:42.392 "product_name": "NVMe disk", 00:45:42.392 "block_size": 4096, 00:45:42.392 "num_blocks": 1310720, 00:45:42.392 "uuid": "d2628404-42f1-4c3d-8733-d85e8a37f387", 00:45:42.392 "numa_id": -1, 00:45:42.392 "assigned_rate_limits": { 00:45:42.392 "rw_ios_per_sec": 0, 00:45:42.392 "rw_mbytes_per_sec": 0, 00:45:42.392 "r_mbytes_per_sec": 0, 00:45:42.392 "w_mbytes_per_sec": 0 00:45:42.392 }, 00:45:42.392 "claimed": true, 00:45:42.392 "claim_type": "read_many_write_one", 00:45:42.392 "zoned": false, 00:45:42.392 "supported_io_types": { 00:45:42.392 "read": true, 00:45:42.392 "write": true, 00:45:42.392 "unmap": true, 00:45:42.392 "flush": true, 00:45:42.392 "reset": true, 00:45:42.392 "nvme_admin": true, 00:45:42.392 "nvme_io": true, 00:45:42.392 "nvme_io_md": false, 00:45:42.392 "write_zeroes": true, 00:45:42.392 "zcopy": false, 00:45:42.392 "get_zone_info": false, 00:45:42.392 "zone_management": false, 00:45:42.392 "zone_append": false, 00:45:42.392 "compare": true, 00:45:42.392 "compare_and_write": false, 00:45:42.392 "abort": true, 00:45:42.392 "seek_hole": false, 00:45:42.392 "seek_data": false, 00:45:42.392 "copy": true, 00:45:42.392 "nvme_iov_md": false 00:45:42.392 }, 00:45:42.392 "driver_specific": { 00:45:42.392 "nvme": [ 00:45:42.392 { 00:45:42.392 "pci_address": "0000:00:11.0", 00:45:42.392 "trid": { 00:45:42.392 "trtype": "PCIe", 00:45:42.392 "traddr": "0000:00:11.0" 00:45:42.392 }, 00:45:42.392 "ctrlr_data": { 00:45:42.392 "cntlid": 0, 00:45:42.392 "vendor_id": "0x1b36", 00:45:42.392 "model_number": "QEMU NVMe Ctrl", 00:45:42.392 "serial_number": "12341", 00:45:42.392 "firmware_revision": "8.0.0", 00:45:42.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:45:42.392 "oacs": { 00:45:42.392 "security": 0, 00:45:42.392 "format": 1, 00:45:42.392 "firmware": 0, 00:45:42.392 "ns_manage": 1 00:45:42.392 }, 00:45:42.392 "multi_ctrlr": false, 00:45:42.392 "ana_reporting": false 00:45:42.392 }, 00:45:42.392 "vs": { 00:45:42.392 "nvme_version": "1.4" 00:45:42.392 }, 00:45:42.392 "ns_data": { 00:45:42.392 "id": 1, 00:45:42.392 "can_share": false 00:45:42.392 } 00:45:42.392 } 00:45:42.392 ], 00:45:42.392 "mp_policy": "active_passive" 00:45:42.392 } 00:45:42.392 } 00:45:42.392 ]' 00:45:42.392 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:42.650 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:45:42.650 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:42.650 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:45:42.650 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:45:42.650 14:06:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:45:42.650 14:06:39 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:45:42.650 14:06:39 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:45:42.650 14:06:39 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:45:42.650 14:06:39 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:42.650 14:06:39 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:42.908 14:06:39 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f1d96004-2d93-4226-b5bb-419af22fc71f 00:45:42.908 14:06:39 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:45:42.908 14:06:39 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f1d96004-2d93-4226-b5bb-419af22fc71f 00:45:42.908 14:06:40 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:45:43.166 14:06:40 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=92f2641f-279d-4a37-8ee1-a05f39c3e4c9 00:45:43.166 14:06:40 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 92f2641f-279d-4a37-8ee1-a05f39c3e4c9 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:45:43.424 14:06:40 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.424 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.424 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:43.424 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:45:43.424 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:45:43.424 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:43.750 { 00:45:43.750 "name": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:43.750 "aliases": [ 00:45:43.750 "lvs/nvme0n1p0" 00:45:43.750 ], 00:45:43.750 "product_name": "Logical Volume", 00:45:43.750 "block_size": 4096, 00:45:43.750 "num_blocks": 26476544, 00:45:43.750 "uuid": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:43.750 "assigned_rate_limits": { 00:45:43.750 "rw_ios_per_sec": 0, 00:45:43.750 "rw_mbytes_per_sec": 0, 00:45:43.750 "r_mbytes_per_sec": 0, 00:45:43.750 "w_mbytes_per_sec": 0 00:45:43.750 }, 00:45:43.750 "claimed": false, 00:45:43.750 "zoned": false, 00:45:43.750 "supported_io_types": { 00:45:43.750 "read": true, 00:45:43.750 "write": true, 00:45:43.750 "unmap": true, 00:45:43.750 "flush": false, 00:45:43.750 "reset": true, 00:45:43.750 "nvme_admin": false, 00:45:43.750 "nvme_io": false, 00:45:43.750 "nvme_io_md": false, 00:45:43.750 "write_zeroes": true, 00:45:43.750 "zcopy": false, 00:45:43.750 "get_zone_info": false, 00:45:43.750 "zone_management": false, 00:45:43.750 "zone_append": false, 00:45:43.750 "compare": false, 00:45:43.750 "compare_and_write": false, 00:45:43.750 "abort": false, 00:45:43.750 "seek_hole": true, 00:45:43.750 "seek_data": true, 00:45:43.750 "copy": false, 00:45:43.750 "nvme_iov_md": false 00:45:43.750 }, 00:45:43.750 "driver_specific": { 00:45:43.750 "lvol": { 00:45:43.750 "lvol_store_uuid": "92f2641f-279d-4a37-8ee1-a05f39c3e4c9", 00:45:43.750 "base_bdev": "nvme0n1", 00:45:43.750 "thin_provision": true, 00:45:43.750 "num_allocated_clusters": 0, 00:45:43.750 "snapshot": false, 00:45:43.750 "clone": false, 00:45:43.750 "esnap_clone": false 00:45:43.750 } 00:45:43.750 } 00:45:43.750 } 00:45:43.750 ]' 00:45:43.750 14:06:40 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:43.750 14:06:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:45:43.750 14:06:40 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:45:43.750 14:06:40 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:45:43.750 14:06:40 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:45:44.011 14:06:41 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:45:44.011 14:06:41 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:45:44.011 14:06:41 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.011 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.011 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:44.011 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:45:44.011 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:45:44.011 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.268 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:44.268 { 00:45:44.268 "name": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:44.268 "aliases": [ 00:45:44.268 "lvs/nvme0n1p0" 00:45:44.268 ], 00:45:44.268 "product_name": "Logical Volume", 00:45:44.268 "block_size": 4096, 00:45:44.268 "num_blocks": 26476544, 00:45:44.268 "uuid": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:44.268 "assigned_rate_limits": { 00:45:44.268 "rw_ios_per_sec": 0, 00:45:44.268 "rw_mbytes_per_sec": 0, 00:45:44.268 "r_mbytes_per_sec": 0, 00:45:44.268 "w_mbytes_per_sec": 0 00:45:44.268 }, 00:45:44.268 "claimed": false, 00:45:44.268 "zoned": false, 00:45:44.268 "supported_io_types": { 00:45:44.268 "read": true, 00:45:44.268 "write": true, 00:45:44.268 "unmap": true, 00:45:44.268 "flush": false, 00:45:44.268 "reset": true, 00:45:44.268 "nvme_admin": false, 00:45:44.268 "nvme_io": false, 00:45:44.268 "nvme_io_md": false, 00:45:44.268 "write_zeroes": true, 00:45:44.268 "zcopy": false, 00:45:44.268 "get_zone_info": false, 00:45:44.268 "zone_management": false, 00:45:44.268 "zone_append": false, 00:45:44.268 "compare": false, 00:45:44.268 "compare_and_write": false, 00:45:44.268 "abort": false, 00:45:44.268 "seek_hole": true, 00:45:44.268 "seek_data": true, 00:45:44.268 "copy": false, 00:45:44.268 "nvme_iov_md": false 00:45:44.268 }, 00:45:44.268 "driver_specific": { 00:45:44.268 "lvol": { 00:45:44.268 "lvol_store_uuid": "92f2641f-279d-4a37-8ee1-a05f39c3e4c9", 00:45:44.268 "base_bdev": "nvme0n1", 00:45:44.268 "thin_provision": true, 00:45:44.268 "num_allocated_clusters": 0, 00:45:44.268 "snapshot": false, 00:45:44.268 "clone": false, 00:45:44.268 "esnap_clone": false 00:45:44.268 } 00:45:44.268 } 00:45:44.268 } 00:45:44.268 ]' 00:45:44.268 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:44.268 14:06:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:45:44.268 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:44.525 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:44.525 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:44.525 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:45:44.525 14:06:41 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:45:44.525 14:06:41 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:45:44.525 14:06:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:45:44.526 14:06:41 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:45:44.526 14:06:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.526 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.526 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:44.526 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:45:44.526 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:45:44.526 14:06:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfb4179-26b1-4d56-8ec6-371986e48bc6 00:45:44.783 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:44.783 { 00:45:44.783 "name": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:44.783 "aliases": [ 00:45:44.783 "lvs/nvme0n1p0" 00:45:44.783 ], 00:45:44.783 "product_name": "Logical Volume", 00:45:44.783 "block_size": 4096, 00:45:44.783 "num_blocks": 26476544, 00:45:44.783 "uuid": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:44.783 "assigned_rate_limits": { 00:45:44.783 "rw_ios_per_sec": 0, 00:45:44.783 "rw_mbytes_per_sec": 0, 00:45:44.783 "r_mbytes_per_sec": 0, 00:45:44.783 "w_mbytes_per_sec": 0 00:45:44.783 }, 00:45:44.783 "claimed": false, 00:45:44.783 "zoned": false, 00:45:44.783 "supported_io_types": { 00:45:44.783 "read": true, 00:45:44.783 "write": true, 00:45:44.783 "unmap": true, 00:45:44.783 "flush": false, 00:45:44.783 "reset": true, 00:45:44.784 "nvme_admin": false, 00:45:44.784 "nvme_io": false, 00:45:44.784 "nvme_io_md": false, 00:45:44.784 "write_zeroes": true, 00:45:44.784 "zcopy": false, 00:45:44.784 "get_zone_info": false, 00:45:44.784 "zone_management": false, 00:45:44.784 "zone_append": false, 00:45:44.784 "compare": false, 00:45:44.784 "compare_and_write": false, 00:45:44.784 "abort": false, 00:45:44.784 "seek_hole": true, 00:45:44.784 "seek_data": true, 00:45:44.784 "copy": false, 00:45:44.784 "nvme_iov_md": false 00:45:44.784 }, 00:45:44.784 "driver_specific": { 00:45:44.784 "lvol": { 00:45:44.784 "lvol_store_uuid": "92f2641f-279d-4a37-8ee1-a05f39c3e4c9", 00:45:44.784 "base_bdev": "nvme0n1", 00:45:44.784 "thin_provision": true, 00:45:44.784 "num_allocated_clusters": 0, 00:45:44.784 "snapshot": false, 00:45:44.784 "clone": false, 00:45:44.784 "esnap_clone": false 00:45:44.784 } 00:45:44.784 } 00:45:44.784 } 00:45:44.784 ]' 00:45:44.784 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:44.784 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:45:44.784 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:45.042 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:45:45.042 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:45.042 14:06:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:45:45.042 14:06:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:45:45.042 14:06:42 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5dfb4179-26b1-4d56-8ec6-371986e48bc6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:45:45.301 [2024-11-20 14:06:42.395903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.395972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:45.301 [2024-11-20 14:06:42.395997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:45.301 [2024-11-20 14:06:42.396010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.301 [2024-11-20 14:06:42.399412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.399616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:45.301 [2024-11-20 14:06:42.399647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:45:45.301 [2024-11-20 14:06:42.399660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.301 [2024-11-20 14:06:42.399874] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:45.301 [2024-11-20 14:06:42.400852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:45.301 [2024-11-20 14:06:42.400895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.400909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:45.301 [2024-11-20 14:06:42.400925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:45:45.301 [2024-11-20 14:06:42.400938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.301 [2024-11-20 14:06:42.401112] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1f0a796-101e-4f30-a82e-71441876703e 00:45:45.301 [2024-11-20 14:06:42.402609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.402768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:45:45.301 [2024-11-20 14:06:42.402792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:45:45.301 [2024-11-20 14:06:42.402808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.301 [2024-11-20 14:06:42.410537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.410592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:45.301 [2024-11-20 14:06:42.410611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.641 ms 00:45:45.301 [2024-11-20 14:06:42.410629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.301 [2024-11-20 14:06:42.410842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.301 [2024-11-20 14:06:42.410866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:45.301 [2024-11-20 14:06:42.410881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.115 ms 00:45:45.301 [2024-11-20 14:06:42.410904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.410953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.302 [2024-11-20 14:06:42.410971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:45.302 [2024-11-20 14:06:42.410984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:45.302 [2024-11-20 14:06:42.411004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.411047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:45.302 [2024-11-20 14:06:42.416370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.302 [2024-11-20 14:06:42.416412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:45.302 [2024-11-20 14:06:42.416432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms 00:45:45.302 [2024-11-20 14:06:42.416445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.416542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.302 [2024-11-20 14:06:42.416558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:45.302 [2024-11-20 14:06:42.416574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:45.302 [2024-11-20 14:06:42.416605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.416646] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:45:45.302 [2024-11-20 14:06:42.416785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:45.302 [2024-11-20 14:06:42.416809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:45.302 [2024-11-20 14:06:42.416825] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:45.302 [2024-11-20 14:06:42.416843] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:45.302 [2024-11-20 14:06:42.416859] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:45.302 [2024-11-20 14:06:42.416875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:45.302 [2024-11-20 14:06:42.416888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:45.302 [2024-11-20 14:06:42.416903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:45.302 [2024-11-20 14:06:42.416917] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:45.302 [2024-11-20 14:06:42.416932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.302 [2024-11-20 14:06:42.416944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:45.302 [2024-11-20 14:06:42.416960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:45:45.302 [2024-11-20 14:06:42.416972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.417075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.302 
[2024-11-20 14:06:42.417088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:45.302 [2024-11-20 14:06:42.417104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:45:45.302 [2024-11-20 14:06:42.417116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.302 [2024-11-20 14:06:42.417240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:45.302 [2024-11-20 14:06:42.417261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:45.302 [2024-11-20 14:06:42.417276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:45.302 [2024-11-20 14:06:42.417316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:45.302 [2024-11-20 14:06:42.417357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:45.302 [2024-11-20 14:06:42.417383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:45.302 [2024-11-20 14:06:42.417394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:45.302 [2024-11-20 14:06:42.417409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:45.302 [2024-11-20 14:06:42.417421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:45.302 [2024-11-20 14:06:42.417435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:45.302 [2024-11-20 14:06:42.417447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:45.302 [2024-11-20 14:06:42.417475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:45.302 [2024-11-20 14:06:42.417529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:45.302 [2024-11-20 14:06:42.417566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:45.302 [2024-11-20 14:06:42.417606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:45:45.302 [2024-11-20 14:06:42.417643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:45.302 [2024-11-20 14:06:42.417686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:45.302 [2024-11-20 14:06:42.417712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:45.302 [2024-11-20 14:06:42.417723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:45.302 [2024-11-20 14:06:42.417737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:45.302 [2024-11-20 14:06:42.417749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:45.302 [2024-11-20 14:06:42.417763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:45.302 [2024-11-20 14:06:42.417775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:45.302 [2024-11-20 14:06:42.417801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:45.302 [2024-11-20 14:06:42.417815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:45.302 [2024-11-20 14:06:42.417842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:45.302 [2024-11-20 14:06:42.417855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:45.302 [2024-11-20 14:06:42.417882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:45.302 [2024-11-20 14:06:42.417901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:45.302 [2024-11-20 14:06:42.417912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:45.302 [2024-11-20 14:06:42.417927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:45.302 [2024-11-20 14:06:42.417938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:45.302 [2024-11-20 14:06:42.417953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:45.302 [2024-11-20 14:06:42.417970] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:45.302 [2024-11-20 14:06:42.417987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:45.302 [2024-11-20 14:06:42.418004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:45.302 [2024-11-20 14:06:42.418020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:45.302 [2024-11-20 14:06:42.418033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:45:45.302 [2024-11-20 14:06:42.418048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:45.302 [2024-11-20 14:06:42.418061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:45.302 [2024-11-20 14:06:42.418076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:45.302 [2024-11-20 14:06:42.418089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:45.302 [2024-11-20 14:06:42.418105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:45.302 [2024-11-20 14:06:42.418118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:45.302 [2024-11-20 14:06:42.418136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:45.302 [2024-11-20 14:06:42.418148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:45.302 [2024-11-20 14:06:42.418164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:45.303 [2024-11-20 14:06:42.418177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:45.303 [2024-11-20 14:06:42.418193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:45.303 [2024-11-20 14:06:42.418205] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:45.303 [2024-11-20 14:06:42.418228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:45.303 [2024-11-20 14:06:42.418241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:45.303 [2024-11-20 14:06:42.418258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:45.303 [2024-11-20 14:06:42.418270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:45.303 [2024-11-20 14:06:42.418286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:45.303 [2024-11-20 14:06:42.418300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.303 [2024-11-20 14:06:42.418316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:45.303 [2024-11-20 14:06:42.418329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:45:45.303 [2024-11-20 14:06:42.418344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.303 [2024-11-20 14:06:42.418428] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:45:45.303 [2024-11-20 14:06:42.418449] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:45:48.589 [2024-11-20 14:06:45.631998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.632265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:45:48.589 [2024-11-20 14:06:45.632437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3213.549 ms 00:45:48.589 [2024-11-20 14:06:45.632521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.677152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.677446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:48.589 [2024-11-20 14:06:45.677585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.109 ms 00:45:48.589 [2024-11-20 14:06:45.677641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.677919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.678087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:48.589 [2024-11-20 14:06:45.678183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:45:48.589 [2024-11-20 14:06:45.678213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.738713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.738776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:48.589 [2024-11-20 14:06:45.738812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.419 ms 00:45:48.589 [2024-11-20 14:06:45.738832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.738991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.739012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:48.589 [2024-11-20 14:06:45.739027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:48.589 [2024-11-20 14:06:45.739044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.739551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.739597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:48.589 [2024-11-20 14:06:45.739628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:45:48.589 [2024-11-20 14:06:45.739645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.739789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.739808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:48.589 [2024-11-20 14:06:45.739822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:45:48.589 [2024-11-20 14:06:45.739841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.762804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.763040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:45:48.589 [2024-11-20 14:06:45.763068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.900 ms 00:45:48.589 [2024-11-20 14:06:45.763085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.777472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:48.589 [2024-11-20 14:06:45.795365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.589 [2024-11-20 14:06:45.795430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:48.589 [2024-11-20 14:06:45.795452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.103 ms 00:45:48.589 [2024-11-20 14:06:45.795465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.589 [2024-11-20 14:06:45.889647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.590 [2024-11-20 14:06:45.889726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:48.590 [2024-11-20 14:06:45.889750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.001 ms 00:45:48.590 [2024-11-20 14:06:45.889763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.590 [2024-11-20 14:06:45.890039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.590 [2024-11-20 14:06:45.890056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:48.590 [2024-11-20 14:06:45.890076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:45:48.590 [2024-11-20 14:06:45.890089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.849 [2024-11-20 14:06:45.929585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.849 [2024-11-20 14:06:45.929656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:48.849 [2024-11-20 14:06:45.929680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.439 ms 00:45:48.849 [2024-11-20 14:06:45.929694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.849 [2024-11-20 14:06:45.969202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.849 [2024-11-20 14:06:45.969271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:48.849 [2024-11-20 14:06:45.969296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.362 ms 00:45:48.849 [2024-11-20 14:06:45.969309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.849 [2024-11-20 14:06:45.970196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.849 [2024-11-20 14:06:45.970225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:48.849 [2024-11-20 14:06:45.970245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:45:48.849 [2024-11-20 14:06:45.970259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:48.849 [2024-11-20 14:06:46.087234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.849 [2024-11-20 14:06:46.087301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:48.849 [2024-11-20 14:06:46.087328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.921 ms 00:45:48.849 [2024-11-20 14:06:46.087342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
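Everything in this startup trace, from "Check configuration" down through the band and L2P initialization steps, is driven by the single bdev_ftl_create RPC at the top of the section. A minimal sketch of that call sequence against an already-running SPDK target (the three commands below are the ones traced elsewhere in this run; the base bdev UUID and the nvc0n1p0 cache partition are the ones this test created):

# Create the FTL bdev as trim.sh@49 does above: -d names the base bdev (here
# by UUID), -c the NV cache bdev, --l2p_dram_limit caps the L2P resident set
# in MiB, and --overprovisioning reserves spare capacity as a percentage.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d 5dfb4179-26b1-4d56-8ec6-371986e48bc6 -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
# Block until bdev examination settles, then confirm ftl0 is registered
# (both calls appear verbatim in the trace that follows).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000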
00:45:48.849 [2024-11-20 14:06:46.132694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:48.849 [2024-11-20 14:06:46.132774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:48.849 [2024-11-20 14:06:46.132802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.162 ms 00:45:48.849 [2024-11-20 14:06:46.132818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:49.108 [2024-11-20 14:06:46.175963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:49.108 [2024-11-20 14:06:46.176231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:45:49.108 [2024-11-20 14:06:46.176267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.967 ms 00:45:49.108 [2024-11-20 14:06:46.176281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:49.108 [2024-11-20 14:06:46.216285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:49.108 [2024-11-20 14:06:46.216371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:49.108 [2024-11-20 14:06:46.216396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.832 ms 00:45:49.108 [2024-11-20 14:06:46.216428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:49.108 [2024-11-20 14:06:46.216577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:49.108 [2024-11-20 14:06:46.216597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:49.108 [2024-11-20 14:06:46.216617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:49.108 [2024-11-20 14:06:46.216646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:49.108 [2024-11-20 14:06:46.216751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:49.108 [2024-11-20 14:06:46.216766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:49.108 [2024-11-20 14:06:46.216783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:45:49.108 [2024-11-20 14:06:46.216802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:49.108 [2024-11-20 14:06:46.218105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:49.108 [2024-11-20 14:06:46.223176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3821.830 ms, result 0 00:45:49.108 [2024-11-20 14:06:46.224179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:49.108 { 00:45:49.108 "name": "ftl0", 00:45:49.108 "uuid": "d1f0a796-101e-4f30-a82e-71441876703e" 00:45:49.109 } 00:45:49.109 14:06:46 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:45:49.109 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:45:49.367 14:06:46 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:45:49.625 [ 00:45:49.625 { 00:45:49.625 "name": "ftl0", 00:45:49.626 "aliases": [ 00:45:49.626 "d1f0a796-101e-4f30-a82e-71441876703e" 00:45:49.626 ], 00:45:49.626 "product_name": "FTL disk", 00:45:49.626 "block_size": 4096, 00:45:49.626 "num_blocks": 23592960, 00:45:49.626 "uuid": "d1f0a796-101e-4f30-a82e-71441876703e", 00:45:49.626 "assigned_rate_limits": { 00:45:49.626 "rw_ios_per_sec": 0, 00:45:49.626 "rw_mbytes_per_sec": 0, 00:45:49.626 "r_mbytes_per_sec": 0, 00:45:49.626 "w_mbytes_per_sec": 0 00:45:49.626 }, 00:45:49.626 "claimed": false, 00:45:49.626 "zoned": false, 00:45:49.626 "supported_io_types": { 00:45:49.626 "read": true, 00:45:49.626 "write": true, 00:45:49.626 "unmap": true, 00:45:49.626 "flush": true, 00:45:49.626 "reset": false, 00:45:49.626 "nvme_admin": false, 00:45:49.626 "nvme_io": false, 00:45:49.626 "nvme_io_md": false, 00:45:49.626 "write_zeroes": true, 00:45:49.626 "zcopy": false, 00:45:49.626 "get_zone_info": false, 00:45:49.626 "zone_management": false, 00:45:49.626 "zone_append": false, 00:45:49.626 "compare": false, 00:45:49.626 "compare_and_write": false, 00:45:49.626 "abort": false, 00:45:49.626 "seek_hole": false, 00:45:49.626 "seek_data": false, 00:45:49.626 "copy": false, 00:45:49.626 "nvme_iov_md": false 00:45:49.626 }, 00:45:49.626 "driver_specific": { 00:45:49.626 "ftl": { 00:45:49.626 "base_bdev": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 00:45:49.626 "cache": "nvc0n1p0" 00:45:49.626 } 00:45:49.626 } 00:45:49.626 } 00:45:49.626 ] 00:45:49.626 14:06:46 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:45:49.626 14:06:46 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:45:49.626 14:06:46 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:45:49.885 14:06:47 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:45:49.885 14:06:47 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:45:50.453 14:06:47 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:45:50.453 { 00:45:50.453 "name": "ftl0", 00:45:50.453 "aliases": [ 00:45:50.453 "d1f0a796-101e-4f30-a82e-71441876703e" 00:45:50.453 ], 00:45:50.453 "product_name": "FTL disk", 00:45:50.453 "block_size": 4096, 00:45:50.453 "num_blocks": 23592960, 00:45:50.453 "uuid": "d1f0a796-101e-4f30-a82e-71441876703e", 00:45:50.453 "assigned_rate_limits": { 00:45:50.453 "rw_ios_per_sec": 0, 00:45:50.453 "rw_mbytes_per_sec": 0, 00:45:50.453 "r_mbytes_per_sec": 0, 00:45:50.453 "w_mbytes_per_sec": 0 00:45:50.453 }, 00:45:50.453 "claimed": false, 00:45:50.453 "zoned": false, 00:45:50.453 "supported_io_types": { 00:45:50.453 "read": true, 00:45:50.453 "write": true, 00:45:50.453 "unmap": true, 00:45:50.453 "flush": true, 00:45:50.453 "reset": false, 00:45:50.453 "nvme_admin": false, 00:45:50.453 "nvme_io": false, 00:45:50.453 "nvme_io_md": false, 00:45:50.453 "write_zeroes": true, 00:45:50.453 "zcopy": false, 00:45:50.453 "get_zone_info": false, 00:45:50.453 "zone_management": false, 00:45:50.453 "zone_append": false, 00:45:50.453 "compare": false, 00:45:50.453 "compare_and_write": false, 00:45:50.453 "abort": false, 00:45:50.453 "seek_hole": false, 00:45:50.453 "seek_data": false, 00:45:50.453 "copy": false, 00:45:50.453 "nvme_iov_md": false 00:45:50.453 }, 00:45:50.453 "driver_specific": { 00:45:50.453 "ftl": { 00:45:50.453 "base_bdev": "5dfb4179-26b1-4d56-8ec6-371986e48bc6", 
00:45:50.453 "cache": "nvc0n1p0" 00:45:50.453 } 00:45:50.454 } 00:45:50.454 } 00:45:50.454 ]' 00:45:50.454 14:06:47 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:45:50.454 14:06:47 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:45:50.454 14:06:47 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:45:50.454 [2024-11-20 14:06:47.720036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.720296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:50.454 [2024-11-20 14:06:47.720330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:50.454 [2024-11-20 14:06:47.720351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.454 [2024-11-20 14:06:47.720409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:50.454 [2024-11-20 14:06:47.725225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.725264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:50.454 [2024-11-20 14:06:47.725291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:45:50.454 [2024-11-20 14:06:47.725305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.454 [2024-11-20 14:06:47.725873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.725895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:50.454 [2024-11-20 14:06:47.725914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:45:50.454 [2024-11-20 14:06:47.725926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.454 [2024-11-20 14:06:47.728839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.728870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:50.454 [2024-11-20 14:06:47.728888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:45:50.454 [2024-11-20 14:06:47.728901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.454 [2024-11-20 14:06:47.734725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.734883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:50.454 [2024-11-20 14:06:47.734915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.780 ms 00:45:50.454 [2024-11-20 14:06:47.734928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.454 [2024-11-20 14:06:47.774011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.454 [2024-11-20 14:06:47.774087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:50.454 [2024-11-20 14:06:47.774114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.976 ms 00:45:50.454 [2024-11-20 14:06:47.774126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.797508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.797562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:50.714 [2024-11-20 14:06:47.797585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 23.269 ms 00:45:50.714 [2024-11-20 14:06:47.797602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.797833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.797849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:50.714 [2024-11-20 14:06:47.797866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:45:50.714 [2024-11-20 14:06:47.797879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.837897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.838123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:50.714 [2024-11-20 14:06:47.838157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.974 ms 00:45:50.714 [2024-11-20 14:06:47.838170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.875571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.875642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:50.714 [2024-11-20 14:06:47.875669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.305 ms 00:45:50.714 [2024-11-20 14:06:47.875682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.913450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.913535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:50.714 [2024-11-20 14:06:47.913576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.647 ms 00:45:50.714 [2024-11-20 14:06:47.913590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.951964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.714 [2024-11-20 14:06:47.952201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:50.714 [2024-11-20 14:06:47.952235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.218 ms 00:45:50.714 [2024-11-20 14:06:47.952248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.714 [2024-11-20 14:06:47.952342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:50.714 [2024-11-20 14:06:47.952363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952474] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:50.714 [2024-11-20 14:06:47.952883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 
[2024-11-20 14:06:47.952896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.952987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:45:50.715 [2024-11-20 14:06:47.953273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:50.715 [2024-11-20 14:06:47.953936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:50.715 [2024-11-20 14:06:47.953954] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:45:50.715 [2024-11-20 14:06:47.953967] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:50.715 [2024-11-20 14:06:47.953983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:50.715 [2024-11-20 14:06:47.953995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:50.715 [2024-11-20 14:06:47.954016] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:50.715 [2024-11-20 14:06:47.954028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:50.715 [2024-11-20 14:06:47.954044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:45:50.715 [2024-11-20 14:06:47.954056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:50.715 [2024-11-20 14:06:47.954070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:50.715 [2024-11-20 14:06:47.954081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:50.715 [2024-11-20 14:06:47.954097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.715 [2024-11-20 14:06:47.954110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:50.715 [2024-11-20 14:06:47.954126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:45:50.715 [2024-11-20 14:06:47.954139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.715 [2024-11-20 14:06:47.975256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.715 [2024-11-20 14:06:47.975427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:50.715 [2024-11-20 14:06:47.975624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.066 ms 00:45:50.715 [2024-11-20 14:06:47.975669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.716 [2024-11-20 14:06:47.976381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:50.716 [2024-11-20 14:06:47.976510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:50.716 [2024-11-20 14:06:47.976603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:45:50.716 [2024-11-20 14:06:47.976706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.046459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.046697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:50.974 [2024-11-20 14:06:48.046852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.046899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.047091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.047307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:50.974 [2024-11-20 14:06:48.047360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.047399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.047556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.047604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:50.974 [2024-11-20 14:06:48.047671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.047712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.047845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.047892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:50.974 [2024-11-20 14:06:48.047936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.047974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.184267] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.184460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:50.974 [2024-11-20 14:06:48.184617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.184662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.290388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.290678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:50.974 [2024-11-20 14:06:48.290854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.290901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.291087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.291134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:50.974 [2024-11-20 14:06:48.291259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.291307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.974 [2024-11-20 14:06:48.291404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.974 [2024-11-20 14:06:48.291447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:50.974 [2024-11-20 14:06:48.291505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:50.974 [2024-11-20 14:06:48.291594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:50.975 [2024-11-20 14:06:48.291812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:50.975 [2024-11-20 14:06:48.291866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:51.233 [2024-11-20 14:06:48.293807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:51.233 [2024-11-20 14:06:48.293837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:51.233 [2024-11-20 14:06:48.293932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:51.233 [2024-11-20 14:06:48.293948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:51.233 [2024-11-20 14:06:48.293965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:51.233 [2024-11-20 14:06:48.293979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:51.233 [2024-11-20 14:06:48.294044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:51.233 [2024-11-20 14:06:48.294059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:51.233 [2024-11-20 14:06:48.294079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:51.233 [2024-11-20 14:06:48.294092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:51.233 [2024-11-20 14:06:48.294167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:51.233 [2024-11-20 14:06:48.294182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:51.233 [2024-11-20 14:06:48.294199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:51.233 [2024-11-20 14:06:48.294212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:45:51.233 [2024-11-20 14:06:48.294425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.366 ms, result 0 00:45:51.233 true 00:45:51.233 14:06:48 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78727 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78727 ']' 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78727 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78727 00:45:51.233 killing process with pid 78727 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78727' 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78727 00:45:51.233 14:06:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78727 00:45:56.501 14:06:53 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:45:57.437 65536+0 records in 00:45:57.437 65536+0 records out 00:45:57.437 268435456 bytes (268 MB, 256 MiB) copied, 1.10787 s, 242 MB/s 00:45:57.437 14:06:54 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:57.437 [2024-11-20 14:06:54.663334] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
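The dd/spdk_dd pair above seeds 256 MiB of random data and replays it onto ftl0 through the bdev layer. A sketch of the two steps as run here; the dd destination path is an assumption inferred from the --if path spdk_dd reads back, since the trace does not show where dd's output goes:

# Generate 65536 x 4 KiB blocks of random data. The of= path is inferred,
# not shown in the trace above.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    bs=4K count=65536
# Replay the pattern onto the FTL bdev (trim.sh@69 above, verbatim);
# --json hands spdk_dd the saved bdev subsystem config so it can bring
# ftl0 up inside its own application instance.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json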
00:45:57.437 [2024-11-20 14:06:54.663464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78947 ] 00:45:57.695 [2024-11-20 14:06:54.830094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:57.695 [2024-11-20 14:06:54.982068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:58.265 [2024-11-20 14:06:55.361316] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:58.265 [2024-11-20 14:06:55.361388] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:58.265 [2024-11-20 14:06:55.526474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.526735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:58.265 [2024-11-20 14:06:55.526761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:58.265 [2024-11-20 14:06:55.526773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.530211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.530381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:58.265 [2024-11-20 14:06:55.530402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:45:58.265 [2024-11-20 14:06:55.530413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.530603] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:58.265 [2024-11-20 14:06:55.531706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:58.265 [2024-11-20 14:06:55.531734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.531745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:58.265 [2024-11-20 14:06:55.531756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:45:58.265 [2024-11-20 14:06:55.531775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.533252] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:58.265 [2024-11-20 14:06:55.553260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.553306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:58.265 [2024-11-20 14:06:55.553320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.009 ms 00:45:58.265 [2024-11-20 14:06:55.553332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.553433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.553447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:58.265 [2024-11-20 14:06:55.553459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:45:58.265 [2024-11-20 14:06:55.553470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.560217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:58.265 [2024-11-20 14:06:55.560374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:58.265 [2024-11-20 14:06:55.560394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.676 ms 00:45:58.265 [2024-11-20 14:06:55.560406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.560532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.560547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:58.265 [2024-11-20 14:06:55.560558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:45:58.265 [2024-11-20 14:06:55.560568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.560599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.560614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:58.265 [2024-11-20 14:06:55.560625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:58.265 [2024-11-20 14:06:55.560634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.560659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:58.265 [2024-11-20 14:06:55.565406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.565438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:58.265 [2024-11-20 14:06:55.565451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:45:58.265 [2024-11-20 14:06:55.565462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.565544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.565558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:58.265 [2024-11-20 14:06:55.565569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:58.265 [2024-11-20 14:06:55.565579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.565603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:58.265 [2024-11-20 14:06:55.565630] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:58.265 [2024-11-20 14:06:55.565667] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:58.265 [2024-11-20 14:06:55.565685] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:58.265 [2024-11-20 14:06:55.565776] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:58.265 [2024-11-20 14:06:55.565790] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:58.265 [2024-11-20 14:06:55.565819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:58.265 [2024-11-20 14:06:55.565833] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:58.265 [2024-11-20 14:06:55.565851] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:58.265 [2024-11-20 14:06:55.565863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:58.265 [2024-11-20 14:06:55.565874] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:58.265 [2024-11-20 14:06:55.565884] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:58.265 [2024-11-20 14:06:55.565895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:58.265 [2024-11-20 14:06:55.565906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.565917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:58.265 [2024-11-20 14:06:55.565928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:45:58.265 [2024-11-20 14:06:55.565938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.566028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.265 [2024-11-20 14:06:55.566059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:58.265 [2024-11-20 14:06:55.566070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:45:58.265 [2024-11-20 14:06:55.566080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.265 [2024-11-20 14:06:55.566175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:58.265 [2024-11-20 14:06:55.566188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:58.265 [2024-11-20 14:06:55.566198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:58.265 [2024-11-20 14:06:55.566228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:58.265 [2024-11-20 14:06:55.566258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:58.265 [2024-11-20 14:06:55.566276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:58.265 [2024-11-20 14:06:55.566285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:58.265 [2024-11-20 14:06:55.566294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:58.265 [2024-11-20 14:06:55.566334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:58.265 [2024-11-20 14:06:55.566344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:58.265 [2024-11-20 14:06:55.566353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:58.265 [2024-11-20 14:06:55.566372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566381] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:58.265 [2024-11-20 14:06:55.566400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:58.265 [2024-11-20 14:06:55.566427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:58.265 [2024-11-20 14:06:55.566455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:58.265 [2024-11-20 14:06:55.566495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.265 [2024-11-20 14:06:55.566513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:58.265 [2024-11-20 14:06:55.566523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:58.265 [2024-11-20 14:06:55.566533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:58.265 [2024-11-20 14:06:55.566542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:58.265 [2024-11-20 14:06:55.566552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:58.266 [2024-11-20 14:06:55.566561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:58.266 [2024-11-20 14:06:55.566570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:58.266 [2024-11-20 14:06:55.566579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:58.266 [2024-11-20 14:06:55.566590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.266 [2024-11-20 14:06:55.566600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:58.266 [2024-11-20 14:06:55.566609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:58.266 [2024-11-20 14:06:55.566618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.266 [2024-11-20 14:06:55.566627] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:58.266 [2024-11-20 14:06:55.566637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:58.266 [2024-11-20 14:06:55.566647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:58.266 [2024-11-20 14:06:55.566663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.266 [2024-11-20 14:06:55.566674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:58.266 [2024-11-20 14:06:55.566683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:58.266 [2024-11-20 14:06:55.566693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:58.266 
[2024-11-20 14:06:55.566703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:58.266 [2024-11-20 14:06:55.566712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:58.266 [2024-11-20 14:06:55.566721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:58.266 [2024-11-20 14:06:55.566731] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:58.266 [2024-11-20 14:06:55.566744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:58.266 [2024-11-20 14:06:55.566766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:58.266 [2024-11-20 14:06:55.566777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:58.266 [2024-11-20 14:06:55.566788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:58.266 [2024-11-20 14:06:55.566798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:58.266 [2024-11-20 14:06:55.566809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:58.266 [2024-11-20 14:06:55.566819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:58.266 [2024-11-20 14:06:55.566830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:58.266 [2024-11-20 14:06:55.566840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:58.266 [2024-11-20 14:06:55.566851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:58.266 [2024-11-20 14:06:55.566904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:58.266 [2024-11-20 14:06:55.566915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:58.266 [2024-11-20 14:06:55.566937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:58.266 [2024-11-20 14:06:55.566947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:58.266 [2024-11-20 14:06:55.566957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:58.266 [2024-11-20 14:06:55.566969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.266 [2024-11-20 14:06:55.566979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:58.266 [2024-11-20 14:06:55.566997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:45:58.266 [2024-11-20 14:06:55.567006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.609253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.609474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:58.526 [2024-11-20 14:06:55.609651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.186 ms 00:45:58.526 [2024-11-20 14:06:55.609691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.609903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.609961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:58.526 [2024-11-20 14:06:55.610067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:45:58.526 [2024-11-20 14:06:55.610106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.690363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.690665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:58.526 [2024-11-20 14:06:55.690825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.193 ms 00:45:58.526 [2024-11-20 14:06:55.690905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.691138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.691203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:58.526 [2024-11-20 14:06:55.691254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:58.526 [2024-11-20 14:06:55.691367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.691970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.691994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:58.526 [2024-11-20 14:06:55.692011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:45:58.526 [2024-11-20 14:06:55.692045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.692226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.692247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:58.526 [2024-11-20 14:06:55.692264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:45:58.526 [2024-11-20 14:06:55.692280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.722268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.722325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:58.526 [2024-11-20 14:06:55.722348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.953 ms 00:45:58.526 [2024-11-20 14:06:55.722364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.753277] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:45:58.526 [2024-11-20 14:06:55.753516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:58.526 [2024-11-20 14:06:55.753549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.753567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:58.526 [2024-11-20 14:06:55.753587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.975 ms 00:45:58.526 [2024-11-20 14:06:55.753602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.802251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.802318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:58.526 [2024-11-20 14:06:55.802359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.509 ms 00:45:58.526 [2024-11-20 14:06:55.802377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.526 [2024-11-20 14:06:55.832825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.526 [2024-11-20 14:06:55.833045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:58.526 [2024-11-20 14:06:55.833079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.253 ms 00:45:58.526 [2024-11-20 14:06:55.833096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.785 [2024-11-20 14:06:55.857136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.857178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:58.786 [2024-11-20 14:06:55.857192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.912 ms 00:45:58.786 [2024-11-20 14:06:55.857203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.858012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.858039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:58.786 [2024-11-20 14:06:55.858052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:45:58.786 [2024-11-20 14:06:55.858063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.948894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.948959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:58.786 [2024-11-20 14:06:55.948977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.797 ms 00:45:58.786 [2024-11-20 14:06:55.948988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.960820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:58.786 [2024-11-20 14:06:55.977350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.977411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:58.786 [2024-11-20 14:06:55.977428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.209 ms 00:45:58.786 [2024-11-20 14:06:55.977440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.977584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.977604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:58.786 [2024-11-20 14:06:55.977615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:58.786 [2024-11-20 14:06:55.977625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.977682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.977693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:58.786 [2024-11-20 14:06:55.977704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:45:58.786 [2024-11-20 14:06:55.977714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.977750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.977764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:58.786 [2024-11-20 14:06:55.977780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:45:58.786 [2024-11-20 14:06:55.977790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:55.977828] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:58.786 [2024-11-20 14:06:55.977841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:55.977851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:58.786 [2024-11-20 14:06:55.977861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:58.786 [2024-11-20 14:06:55.977871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:56.015546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:56.015598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:58.786 [2024-11-20 14:06:56.015612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.653 ms 00:45:58.786 [2024-11-20 14:06:56.015623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.786 [2024-11-20 14:06:56.015742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.786 [2024-11-20 14:06:56.015757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:58.786 [2024-11-20 14:06:56.015777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:45:58.786 [2024-11-20 14:06:56.015787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:45:58.786 [2024-11-20 14:06:56.016905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:58.786 [2024-11-20 14:06:56.021894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 489.988 ms, result 0 00:45:58.786 [2024-11-20 14:06:56.022709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:58.786 [2024-11-20 14:06:56.041731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:46:00.164 [2024-11-20T14:06:58.056Z] Copying: 27/256 [MB] (27 MBps)
[2024-11-20T14:06:59.433Z] Copying: 52/256 [MB] (24 MBps)
[2024-11-20T14:07:00.367Z] Copying: 79/256 [MB] (27 MBps)
[2024-11-20T14:07:01.303Z] Copying: 106/256 [MB] (27 MBps)
[2024-11-20T14:07:02.238Z] Copying: 133/256 [MB] (27 MBps)
[2024-11-20T14:07:03.220Z] Copying: 161/256 [MB] (27 MBps)
[2024-11-20T14:07:04.158Z] Copying: 189/256 [MB] (28 MBps)
[2024-11-20T14:07:05.094Z] Copying: 214/256 [MB] (25 MBps)
[2024-11-20T14:07:05.662Z] Copying: 241/256 [MB] (26 MBps)
[2024-11-20T14:07:05.662Z] Copying: 256/256 [MB] (average 26 MBps)
[2024-11-20 14:07:05.570613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:08.339 [2024-11-20 14:07:05.586825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.586873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:08.339 [2024-11-20 14:07:05.586889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:08.339 [2024-11-20 14:07:05.586902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.339 [2024-11-20 14:07:05.586937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:08.339 [2024-11-20 14:07:05.591344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.591369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:08.339 [2024-11-20 14:07:05.591382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms 00:46:08.339 [2024-11-20 14:07:05.591392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.339 [2024-11-20 14:07:05.593617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.593654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:08.339 [2024-11-20 14:07:05.593668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:46:08.339 [2024-11-20 14:07:05.593680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.339 [2024-11-20 14:07:05.600166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.600201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:08.339 [2024-11-20 14:07:05.600222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.465 ms 00:46:08.339 [2024-11-20 14:07:05.600233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.339 [2024-11-20 14:07:05.606229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.606261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:08.339
[2024-11-20 14:07:05.606273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:46:08.339 [2024-11-20 14:07:05.606283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.339 [2024-11-20 14:07:05.644753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.339 [2024-11-20 14:07:05.644800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:08.339 [2024-11-20 14:07:05.644815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.417 ms 00:46:08.339 [2024-11-20 14:07:05.644826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.599 [2024-11-20 14:07:05.667746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.599 [2024-11-20 14:07:05.667812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:08.600 [2024-11-20 14:07:05.667835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.843 ms 00:46:08.600 [2024-11-20 14:07:05.667850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.668005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.600 [2024-11-20 14:07:05.668019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:08.600 [2024-11-20 14:07:05.668031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:46:08.600 [2024-11-20 14:07:05.668041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.707775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.600 [2024-11-20 14:07:05.707824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:08.600 [2024-11-20 14:07:05.707840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.713 ms 00:46:08.600 [2024-11-20 14:07:05.707850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.745096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.600 [2024-11-20 14:07:05.745142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:08.600 [2024-11-20 14:07:05.745157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.167 ms 00:46:08.600 [2024-11-20 14:07:05.745168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.784055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.600 [2024-11-20 14:07:05.784104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:08.600 [2024-11-20 14:07:05.784136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.814 ms 00:46:08.600 [2024-11-20 14:07:05.784147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.824220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.600 [2024-11-20 14:07:05.824274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:08.600 [2024-11-20 14:07:05.824308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.956 ms 00:46:08.600 [2024-11-20 14:07:05.824320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.600 [2024-11-20 14:07:05.824403] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:08.600 [2024-11-20 14:07:05.824445] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 
14:07:05.824771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.824997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:46:08.600 [2024-11-20 14:07:05.825077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:08.600 [2024-11-20 14:07:05.825123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:08.601 [2024-11-20 14:07:05.825695] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:08.601 [2024-11-20 14:07:05.825705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:08.601 [2024-11-20 14:07:05.825717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:08.601 [2024-11-20 14:07:05.825728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:08.601 [2024-11-20 14:07:05.825739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:08.601 [2024-11-20 14:07:05.825750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:08.601 [2024-11-20 14:07:05.825761] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:08.601 [2024-11-20 14:07:05.825772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:08.601 [2024-11-20 14:07:05.825783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:08.601 [2024-11-20 14:07:05.825793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:08.601 [2024-11-20 14:07:05.825803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:08.601 [2024-11-20 14:07:05.825814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.601 [2024-11-20 14:07:05.825825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:08.601 [2024-11-20 14:07:05.825844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:46:08.601 [2024-11-20 14:07:05.825855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.848174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.601 [2024-11-20 14:07:05.848219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:08.601 [2024-11-20 14:07:05.848234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.292 ms 00:46:08.601 [2024-11-20 14:07:05.848246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.848918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.601 [2024-11-20 14:07:05.848948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:08.601 [2024-11-20 14:07:05.848960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:46:08.601 [2024-11-20 14:07:05.848970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.907490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.601 [2024-11-20 14:07:05.907549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:08.601 [2024-11-20 14:07:05.907564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.601 [2024-11-20 14:07:05.907574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.907712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.601 [2024-11-20 14:07:05.907731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:08.601 [2024-11-20 14:07:05.907742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.601 [2024-11-20 14:07:05.907752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.907841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.601 [2024-11-20 14:07:05.907854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:08.601 [2024-11-20 14:07:05.907865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.601 [2024-11-20 14:07:05.907875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.601 [2024-11-20 14:07:05.907895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.601 [2024-11-20 14:07:05.907906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:08.601 [2024-11-20 14:07:05.907925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.601 [2024-11-20 14:07:05.907936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.872 [2024-11-20 14:07:06.036844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.872 [2024-11-20 14:07:06.036905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:08.873 [2024-11-20 14:07:06.036921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.036932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:08.873 [2024-11-20 14:07:06.145355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:08.873 [2024-11-20 14:07:06.145517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:08.873 [2024-11-20 14:07:06.145579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:08.873 [2024-11-20 14:07:06.145727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:08.873 [2024-11-20 14:07:06.145797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 
[2024-11-20 14:07:06.145807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:08.873 [2024-11-20 14:07:06.145873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.145927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:08.873 [2024-11-20 14:07:06.145939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:08.873 [2024-11-20 14:07:06.145949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:08.873 [2024-11-20 14:07:06.145963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.873 [2024-11-20 14:07:06.146104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.275 ms, result 0 00:46:10.256 00:46:10.256 00:46:10.256 14:07:07 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79073 00:46:10.256 14:07:07 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:46:10.256 14:07:07 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79073 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79073 ']' 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:10.256 14:07:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:46:10.514 [2024-11-20 14:07:07.592673] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:46:10.514 [2024-11-20 14:07:07.592824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79073 ] 00:46:10.514 [2024-11-20 14:07:07.764509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:10.772 [2024-11-20 14:07:07.885626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:11.710 14:07:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:11.710 14:07:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:46:11.710 14:07:08 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:46:11.969 [2024-11-20 14:07:09.099960] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:11.969 [2024-11-20 14:07:09.100043] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:11.969 [2024-11-20 14:07:09.262867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:11.969 [2024-11-20 14:07:09.262939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:11.969 [2024-11-20 14:07:09.262958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:11.969 [2024-11-20 14:07:09.262970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:11.969 [2024-11-20 14:07:09.266569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:11.969 [2024-11-20 14:07:09.266619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:11.969 [2024-11-20 14:07:09.266637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.573 ms 00:46:11.969 [2024-11-20 14:07:09.266665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:11.969 [2024-11-20 14:07:09.266818] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:11.969 [2024-11-20 14:07:09.267959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:11.969 [2024-11-20 14:07:09.267994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:11.969 [2024-11-20 14:07:09.268006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:11.969 [2024-11-20 14:07:09.268022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:46:11.969 [2024-11-20 14:07:09.268033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:11.969 [2024-11-20 14:07:09.269869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:12.228 [2024-11-20 14:07:09.292319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.292398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:12.228 [2024-11-20 14:07:09.292415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.452 ms 00:46:12.228 [2024-11-20 14:07:09.292429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.292624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.292646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:12.228 [2024-11-20 14:07:09.292659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:46:12.228 [2024-11-20 14:07:09.292673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.300248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.300308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:12.228 [2024-11-20 14:07:09.300321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.512 ms 00:46:12.228 [2024-11-20 14:07:09.300335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.300514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.300554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:12.228 [2024-11-20 14:07:09.300567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:46:12.228 [2024-11-20 14:07:09.300583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.300628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.300646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:12.228 [2024-11-20 14:07:09.300658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:12.228 [2024-11-20 14:07:09.300673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.300704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:46:12.228 [2024-11-20 14:07:09.306047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.306093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:12.228 [2024-11-20 14:07:09.306108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:46:12.228 [2024-11-20 14:07:09.306118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.306217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.306230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:12.228 [2024-11-20 14:07:09.306244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:12.228 [2024-11-20 14:07:09.306256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.228 [2024-11-20 14:07:09.306300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:12.228 [2024-11-20 14:07:09.306323] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:12.228 [2024-11-20 14:07:09.306376] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:12.228 [2024-11-20 14:07:09.306399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:12.228 [2024-11-20 14:07:09.306521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:12.228 [2024-11-20 14:07:09.306542] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:12.228 [2024-11-20 14:07:09.306565] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:12.228 [2024-11-20 14:07:09.306580] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:12.228 [2024-11-20 14:07:09.306596] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:12.228 [2024-11-20 14:07:09.306608] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:46:12.228 [2024-11-20 14:07:09.306622] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:12.228 [2024-11-20 14:07:09.306633] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:12.228 [2024-11-20 14:07:09.306649] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:12.228 [2024-11-20 14:07:09.306664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.228 [2024-11-20 14:07:09.306678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:12.229 [2024-11-20 14:07:09.306689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:46:12.229 [2024-11-20 14:07:09.306703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.306792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.306810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:12.229 [2024-11-20 14:07:09.306821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:46:12.229 [2024-11-20 14:07:09.306834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.306945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:12.229 [2024-11-20 14:07:09.306962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:12.229 [2024-11-20 14:07:09.306974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:12.229 [2024-11-20 14:07:09.306987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.306997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:12.229 [2024-11-20 14:07:09.307009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:12.229 [2024-11-20 14:07:09.307044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:12.229 [2024-11-20 14:07:09.307066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:12.229 [2024-11-20 14:07:09.307078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:46:12.229 [2024-11-20 14:07:09.307087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:12.229 [2024-11-20 14:07:09.307099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:12.229 [2024-11-20 14:07:09.307109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:46:12.229 [2024-11-20 14:07:09.307121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 
[2024-11-20 14:07:09.307130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:12.229 [2024-11-20 14:07:09.307148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:12.229 [2024-11-20 14:07:09.307195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:12.229 [2024-11-20 14:07:09.307239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:12.229 [2024-11-20 14:07:09.307272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:12.229 [2024-11-20 14:07:09.307310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:12.229 [2024-11-20 14:07:09.307343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:12.229 [2024-11-20 14:07:09.307368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:12.229 [2024-11-20 14:07:09.307382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:46:12.229 [2024-11-20 14:07:09.307392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:12.229 [2024-11-20 14:07:09.307406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:12.229 [2024-11-20 14:07:09.307416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:46:12.229 [2024-11-20 14:07:09.307434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:12.229 [2024-11-20 14:07:09.307457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:46:12.229 [2024-11-20 14:07:09.307467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307487] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:12.229 [2024-11-20 14:07:09.307502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:12.229 [2024-11-20 14:07:09.307514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:12.229 [2024-11-20 14:07:09.307536] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:46:12.229 [2024-11-20 14:07:09.307546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:12.229 [2024-11-20 14:07:09.307558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:12.229 [2024-11-20 14:07:09.307567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:12.229 [2024-11-20 14:07:09.307579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:12.229 [2024-11-20 14:07:09.307588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:12.229 [2024-11-20 14:07:09.307602] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:12.229 [2024-11-20 14:07:09.307615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:46:12.229 [2024-11-20 14:07:09.307642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:46:12.229 [2024-11-20 14:07:09.307656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:46:12.229 [2024-11-20 14:07:09.307668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:46:12.229 [2024-11-20 14:07:09.307681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:46:12.229 [2024-11-20 14:07:09.307692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:46:12.229 [2024-11-20 14:07:09.307704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:46:12.229 [2024-11-20 14:07:09.307714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:46:12.229 [2024-11-20 14:07:09.307727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:46:12.229 [2024-11-20 14:07:09.307738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:46:12.229 [2024-11-20 14:07:09.307831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:12.229 [2024-11-20 
14:07:09.307843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:12.229 [2024-11-20 14:07:09.307880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:12.229 [2024-11-20 14:07:09.307896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:12.229 [2024-11-20 14:07:09.307909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:12.229 [2024-11-20 14:07:09.307926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.307938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:12.229 [2024-11-20 14:07:09.307954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:46:12.229 [2024-11-20 14:07:09.307965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.349761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.349819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:12.229 [2024-11-20 14:07:09.349841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.711 ms 00:46:12.229 [2024-11-20 14:07:09.349858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.350055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.350077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:12.229 [2024-11-20 14:07:09.350093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:46:12.229 [2024-11-20 14:07:09.350104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.399470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.399545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:12.229 [2024-11-20 14:07:09.399565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.332 ms 00:46:12.229 [2024-11-20 14:07:09.399576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.229 [2024-11-20 14:07:09.399710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.229 [2024-11-20 14:07:09.399722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:12.229 [2024-11-20 14:07:09.399736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:12.230 [2024-11-20 14:07:09.399747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.400229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.400252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:12.230 [2024-11-20 14:07:09.400270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:46:12.230 [2024-11-20 14:07:09.400281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.400415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.400435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:12.230 [2024-11-20 14:07:09.400450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:46:12.230 [2024-11-20 14:07:09.400461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.423399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.423465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:12.230 [2024-11-20 14:07:09.423501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.895 ms 00:46:12.230 [2024-11-20 14:07:09.423514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.459381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:46:12.230 [2024-11-20 14:07:09.459448] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:12.230 [2024-11-20 14:07:09.459471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.459494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:12.230 [2024-11-20 14:07:09.459511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.786 ms 00:46:12.230 [2024-11-20 14:07:09.459523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.492557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.492645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:12.230 [2024-11-20 14:07:09.492665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.876 ms 00:46:12.230 [2024-11-20 14:07:09.492676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.514295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.514368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:12.230 [2024-11-20 14:07:09.514392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.441 ms 00:46:12.230 [2024-11-20 14:07:09.514402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.535198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.535271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:12.230 [2024-11-20 14:07:09.535294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.606 ms 00:46:12.230 [2024-11-20 14:07:09.535305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.230 [2024-11-20 14:07:09.536276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.230 [2024-11-20 14:07:09.536311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:12.230 [2024-11-20 14:07:09.536330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:46:12.230 [2024-11-20 14:07:09.536342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 
14:07:09.633677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.633774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:12.489 [2024-11-20 14:07:09.633797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.283 ms 00:46:12.489 [2024-11-20 14:07:09.633810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.648436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:46:12.489 [2024-11-20 14:07:09.665978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.666062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:12.489 [2024-11-20 14:07:09.666079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.970 ms 00:46:12.489 [2024-11-20 14:07:09.666092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.666212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.666229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:12.489 [2024-11-20 14:07:09.666240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:12.489 [2024-11-20 14:07:09.666253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.666311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.666326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:12.489 [2024-11-20 14:07:09.666340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:46:12.489 [2024-11-20 14:07:09.666352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.666378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.666395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:12.489 [2024-11-20 14:07:09.666405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:12.489 [2024-11-20 14:07:09.666418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.666454] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:12.489 [2024-11-20 14:07:09.666472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.666508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:12.489 [2024-11-20 14:07:09.666521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:46:12.489 [2024-11-20 14:07:09.666535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.707299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.707376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:12.489 [2024-11-20 14:07:09.707397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.723 ms 00:46:12.489 [2024-11-20 14:07:09.707409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.707611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.489 [2024-11-20 14:07:09.707627] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:12.489 [2024-11-20 14:07:09.707644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:46:12.489 [2024-11-20 14:07:09.707655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.489 [2024-11-20 14:07:09.708732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:12.489 [2024-11-20 14:07:09.713965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 445.505 ms, result 0 00:46:12.489 [2024-11-20 14:07:09.715551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:12.489 Some configs were skipped because the RPC state that can call them passed over. 00:46:12.489 14:07:09 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:46:12.749 [2024-11-20 14:07:09.949358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.749 [2024-11-20 14:07:09.949443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:46:12.749 [2024-11-20 14:07:09.949479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:46:12.749 [2024-11-20 14:07:09.949510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.749 [2024-11-20 14:07:09.949560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.731 ms, result 0 00:46:12.749 true 00:46:12.749 14:07:09 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:46:13.009 [2024-11-20 14:07:10.213433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:13.009 [2024-11-20 14:07:10.213518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:46:13.009 [2024-11-20 14:07:10.213540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:46:13.009 [2024-11-20 14:07:10.213553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:13.009 [2024-11-20 14:07:10.213603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.467 ms, result 0 00:46:13.009 true 00:46:13.009 14:07:10 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79073 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79073 ']' 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79073 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79073 00:46:13.009 killing process with pid 79073 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79073' 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79073 00:46:13.009 14:07:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79073 00:46:14.388 [2024-11-20 14:07:11.460968] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.461044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:14.388 [2024-11-20 14:07:11.461062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:14.388 [2024-11-20 14:07:11.461078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.461101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:14.388 [2024-11-20 14:07:11.465622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.465664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:14.388 [2024-11-20 14:07:11.465684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.494 ms 00:46:14.388 [2024-11-20 14:07:11.465695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.465976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.465990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:14.388 [2024-11-20 14:07:11.466004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:46:14.388 [2024-11-20 14:07:11.466015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.469535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.469584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:14.388 [2024-11-20 14:07:11.469601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:46:14.388 [2024-11-20 14:07:11.469613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.476256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.476306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:14.388 [2024-11-20 14:07:11.476324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.593 ms 00:46:14.388 [2024-11-20 14:07:11.476335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.492963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.493032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:14.388 [2024-11-20 14:07:11.493058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.505 ms 00:46:14.388 [2024-11-20 14:07:11.493083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.504848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.504924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:14.388 [2024-11-20 14:07:11.504944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.671 ms 00:46:14.388 [2024-11-20 14:07:11.504956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.505129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.505144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:14.388 [2024-11-20 14:07:11.505158] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:46:14.388 [2024-11-20 14:07:11.505169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.522707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.522779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:14.388 [2024-11-20 14:07:11.522799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.496 ms 00:46:14.388 [2024-11-20 14:07:11.522809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.539944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.540013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:14.388 [2024-11-20 14:07:11.540037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.028 ms 00:46:14.388 [2024-11-20 14:07:11.540048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.556476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.556573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:14.388 [2024-11-20 14:07:11.556602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.344 ms 00:46:14.388 [2024-11-20 14:07:11.556615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.573529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.388 [2024-11-20 14:07:11.573603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:14.388 [2024-11-20 14:07:11.573624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.738 ms 00:46:14.388 [2024-11-20 14:07:11.573635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.388 [2024-11-20 14:07:11.573714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:14.388 [2024-11-20 14:07:11.573737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 
14:07:11.573883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:14.388 [2024-11-20 14:07:11.573924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.573935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.573950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.573961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.573979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.573991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:46:14.389 [2024-11-20 14:07:11.574249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.574994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:14.389 [2024-11-20 14:07:11.575293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:14.389 [2024-11-20 14:07:11.575311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:14.389 [2024-11-20 14:07:11.575344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:14.389 [2024-11-20 14:07:11.575359] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:14.390 [2024-11-20 14:07:11.575371] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:14.390 [2024-11-20 14:07:11.575386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:14.390 [2024-11-20 14:07:11.575398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:14.390 [2024-11-20 14:07:11.575414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:14.390 [2024-11-20 14:07:11.575425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:14.390 [2024-11-20 14:07:11.575439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:14.390 [2024-11-20 14:07:11.575450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:14.390 [2024-11-20 14:07:11.575466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:46:14.390 [2024-11-20 14:07:11.575502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:14.390 [2024-11-20 14:07:11.575519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:46:14.390 [2024-11-20 14:07:11.575533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.597838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.390 [2024-11-20 14:07:11.597900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:14.390 [2024-11-20 14:07:11.597922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.260 ms 00:46:14.390 [2024-11-20 14:07:11.597933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.598577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.390 [2024-11-20 14:07:11.598595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:14.390 [2024-11-20 14:07:11.598612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:46:14.390 [2024-11-20 14:07:11.598622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.674003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.390 [2024-11-20 14:07:11.674066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:14.390 [2024-11-20 14:07:11.674085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.390 [2024-11-20 14:07:11.674096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.674256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.390 [2024-11-20 14:07:11.674269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:14.390 [2024-11-20 14:07:11.674287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.390 [2024-11-20 14:07:11.674297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.674364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.390 [2024-11-20 14:07:11.674377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:14.390 [2024-11-20 14:07:11.674394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.390 [2024-11-20 14:07:11.674404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.390 [2024-11-20 14:07:11.674427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.390 [2024-11-20 14:07:11.674438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:14.390 [2024-11-20 14:07:11.674450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.390 [2024-11-20 14:07:11.674463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.807241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.807575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:14.650 [2024-11-20 14:07:11.807614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.807628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 
14:07:11.916552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.916607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:14.650 [2024-11-20 14:07:11.916630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.916641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.916754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.916767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:14.650 [2024-11-20 14:07:11.916783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.916793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.916824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.916835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:14.650 [2024-11-20 14:07:11.916848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.916859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.916980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.916998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:14.650 [2024-11-20 14:07:11.917012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.917022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.917064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.917076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:14.650 [2024-11-20 14:07:11.917089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.917099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.917143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.917155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:14.650 [2024-11-20 14:07:11.917170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.917180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.917225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:14.650 [2024-11-20 14:07:11.917236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:14.650 [2024-11-20 14:07:11.917249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:14.650 [2024-11-20 14:07:11.917259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.650 [2024-11-20 14:07:11.917401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.408 ms, result 0 00:46:16.030 14:07:12 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:46:16.030 14:07:12 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:16.030 [2024-11-20 14:07:13.112811] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:46:16.030 [2024-11-20 14:07:13.112986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79147 ] 00:46:16.030 [2024-11-20 14:07:13.307624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.288 [2024-11-20 14:07:13.429150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:16.546 [2024-11-20 14:07:13.807254] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:16.546 [2024-11-20 14:07:13.807339] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:16.805 [2024-11-20 14:07:13.971212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:13.971280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:16.806 [2024-11-20 14:07:13.971297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:16.806 [2024-11-20 14:07:13.971308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:13.974881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:13.974934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:16.806 [2024-11-20 14:07:13.974949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:46:16.806 [2024-11-20 14:07:13.974961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:13.975102] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:16.806 [2024-11-20 14:07:13.976224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:16.806 [2024-11-20 14:07:13.976263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:13.976276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:16.806 [2024-11-20 14:07:13.976289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:46:16.806 [2024-11-20 14:07:13.976300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:13.978223] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:16.806 [2024-11-20 14:07:14.000395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.000704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:16.806 [2024-11-20 14:07:14.000732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.169 ms 00:46:16.806 [2024-11-20 14:07:14.000745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.000920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.000936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:16.806 [2024-11-20 14:07:14.000949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:46:16.806 [2024-11-20 14:07:14.000960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.008600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.008903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:16.806 [2024-11-20 14:07:14.008927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.585 ms 00:46:16.806 [2024-11-20 14:07:14.008939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.009076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.009090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:16.806 [2024-11-20 14:07:14.009101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:46:16.806 [2024-11-20 14:07:14.009112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.009144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.009160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:16.806 [2024-11-20 14:07:14.009170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:16.806 [2024-11-20 14:07:14.009180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.009206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:46:16.806 [2024-11-20 14:07:14.014433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.014497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:16.806 [2024-11-20 14:07:14.014513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:46:16.806 [2024-11-20 14:07:14.014523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.014622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.014636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:16.806 [2024-11-20 14:07:14.014648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:16.806 [2024-11-20 14:07:14.014659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.014683] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:16.806 [2024-11-20 14:07:14.014711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:16.806 [2024-11-20 14:07:14.014747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:16.806 [2024-11-20 14:07:14.014767] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:16.806 [2024-11-20 14:07:14.014859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:16.806 [2024-11-20 14:07:14.014873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:16.806 [2024-11-20 14:07:14.014886] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:16.806 [2024-11-20 14:07:14.014899] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:16.806 [2024-11-20 14:07:14.014915] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:16.806 [2024-11-20 14:07:14.014927] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:46:16.806 [2024-11-20 14:07:14.014937] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:16.806 [2024-11-20 14:07:14.014946] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:16.806 [2024-11-20 14:07:14.014957] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:16.806 [2024-11-20 14:07:14.014967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.014977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:16.806 [2024-11-20 14:07:14.014988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:46:16.806 [2024-11-20 14:07:14.015015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.015100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.806 [2024-11-20 14:07:14.015116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:16.806 [2024-11-20 14:07:14.015128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:46:16.806 [2024-11-20 14:07:14.015139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.806 [2024-11-20 14:07:14.015240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:16.806 [2024-11-20 14:07:14.015253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:16.806 [2024-11-20 14:07:14.015265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:16.806 [2024-11-20 14:07:14.015297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:16.806 [2024-11-20 14:07:14.015328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:16.806 [2024-11-20 14:07:14.015348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:16.806 [2024-11-20 14:07:14.015358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:46:16.806 [2024-11-20 14:07:14.015369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:16.806 [2024-11-20 14:07:14.015393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:16.806 [2024-11-20 14:07:14.015403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:46:16.806 [2024-11-20 14:07:14.015413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015423] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:16.806 [2024-11-20 14:07:14.015434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:16.806 [2024-11-20 14:07:14.015465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:16.806 [2024-11-20 14:07:14.015507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:16.806 [2024-11-20 14:07:14.015538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:16.806 [2024-11-20 14:07:14.015568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:16.806 [2024-11-20 14:07:14.015588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:16.806 [2024-11-20 14:07:14.015598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:46:16.806 [2024-11-20 14:07:14.015608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:16.806 [2024-11-20 14:07:14.015619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:16.806 [2024-11-20 14:07:14.015629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:46:16.806 [2024-11-20 14:07:14.015639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:16.806 [2024-11-20 14:07:14.015649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:16.807 [2024-11-20 14:07:14.015658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:46:16.807 [2024-11-20 14:07:14.015668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.807 [2024-11-20 14:07:14.015678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:16.807 [2024-11-20 14:07:14.015688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:46:16.807 [2024-11-20 14:07:14.015698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.807 [2024-11-20 14:07:14.015707] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:16.807 [2024-11-20 14:07:14.015720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:16.807 [2024-11-20 14:07:14.015731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:16.807 [2024-11-20 14:07:14.015746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:16.807 [2024-11-20 14:07:14.015766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:16.807 
[2024-11-20 14:07:14.015776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:16.807 [2024-11-20 14:07:14.015787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:16.807 [2024-11-20 14:07:14.015797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:16.807 [2024-11-20 14:07:14.015807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:16.807 [2024-11-20 14:07:14.015818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:16.807 [2024-11-20 14:07:14.015829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:16.807 [2024-11-20 14:07:14.015842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.015855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:46:16.807 [2024-11-20 14:07:14.015867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:46:16.807 [2024-11-20 14:07:14.015878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:46:16.807 [2024-11-20 14:07:14.015889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:46:16.807 [2024-11-20 14:07:14.015901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:46:16.807 [2024-11-20 14:07:14.015912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:46:16.807 [2024-11-20 14:07:14.015923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:46:16.807 [2024-11-20 14:07:14.015934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:46:16.807 [2024-11-20 14:07:14.015945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:46:16.807 [2024-11-20 14:07:14.015957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.015968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.015979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.015990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.016001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:46:16.807 [2024-11-20 14:07:14.016012] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:16.807 [2024-11-20 14:07:14.016024] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.016036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:16.807 [2024-11-20 14:07:14.016047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:16.807 [2024-11-20 14:07:14.016058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:16.807 [2024-11-20 14:07:14.016069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:16.807 [2024-11-20 14:07:14.016081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.807 [2024-11-20 14:07:14.016093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:16.807 [2024-11-20 14:07:14.016114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:46:16.807 [2024-11-20 14:07:14.016125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.807 [2024-11-20 14:07:14.057215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.807 [2024-11-20 14:07:14.057273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:16.807 [2024-11-20 14:07:14.057289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.024 ms 00:46:16.807 [2024-11-20 14:07:14.057300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:16.807 [2024-11-20 14:07:14.057469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:16.807 [2024-11-20 14:07:14.057504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:16.807 [2024-11-20 14:07:14.057516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:46:16.807 [2024-11-20 14:07:14.057528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.148662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.148987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:17.066 [2024-11-20 14:07:14.149026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.102 ms 00:46:17.066 [2024-11-20 14:07:14.149053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.149254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.149276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:17.066 [2024-11-20 14:07:14.149294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:17.066 [2024-11-20 14:07:14.149309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.149871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.149895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:17.066 [2024-11-20 14:07:14.149913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:46:17.066 [2024-11-20 14:07:14.149939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 
14:07:14.150120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.150150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:17.066 [2024-11-20 14:07:14.150167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:46:17.066 [2024-11-20 14:07:14.150182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.180272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.180606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:17.066 [2024-11-20 14:07:14.180646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.052 ms 00:46:17.066 [2024-11-20 14:07:14.180663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.212087] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:46:17.066 [2024-11-20 14:07:14.212412] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:17.066 [2024-11-20 14:07:14.212446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.212464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:17.066 [2024-11-20 14:07:14.212506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.567 ms 00:46:17.066 [2024-11-20 14:07:14.212523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.262978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.263343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:17.066 [2024-11-20 14:07:14.263396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.206 ms 00:46:17.066 [2024-11-20 14:07:14.263414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.295248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.295333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:17.066 [2024-11-20 14:07:14.295355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.597 ms 00:46:17.066 [2024-11-20 14:07:14.295372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.326819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.326910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:17.066 [2024-11-20 14:07:14.326934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.241 ms 00:46:17.066 [2024-11-20 14:07:14.326951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.066 [2024-11-20 14:07:14.328393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.066 [2024-11-20 14:07:14.328441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:17.066 [2024-11-20 14:07:14.328461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:46:17.066 [2024-11-20 14:07:14.328490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.441798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:46:17.324 [2024-11-20 14:07:14.442077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:17.324 [2024-11-20 14:07:14.442106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.259 ms 00:46:17.324 [2024-11-20 14:07:14.442119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.456854] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:46:17.324 [2024-11-20 14:07:14.474978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.324 [2024-11-20 14:07:14.475046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:17.324 [2024-11-20 14:07:14.475064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.676 ms 00:46:17.324 [2024-11-20 14:07:14.475100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.475259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.324 [2024-11-20 14:07:14.475274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:17.324 [2024-11-20 14:07:14.475287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:17.324 [2024-11-20 14:07:14.475298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.475360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.324 [2024-11-20 14:07:14.475373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:17.324 [2024-11-20 14:07:14.475385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:46:17.324 [2024-11-20 14:07:14.475395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.475441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.324 [2024-11-20 14:07:14.475456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:17.324 [2024-11-20 14:07:14.475467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:46:17.324 [2024-11-20 14:07:14.475478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.324 [2024-11-20 14:07:14.475550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:17.324 [2024-11-20 14:07:14.475566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.325 [2024-11-20 14:07:14.475577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:17.325 [2024-11-20 14:07:14.475588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:46:17.325 [2024-11-20 14:07:14.475599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.325 [2024-11-20 14:07:14.516787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.325 [2024-11-20 14:07:14.516863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:17.325 [2024-11-20 14:07:14.516881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.158 ms 00:46:17.325 [2024-11-20 14:07:14.516893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.325 [2024-11-20 14:07:14.517096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:17.325 [2024-11-20 14:07:14.517111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:46:17.325 [2024-11-20 14:07:14.517123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:46:17.325 [2024-11-20 14:07:14.517133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:17.325 [2024-11-20 14:07:14.518224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:17.325 [2024-11-20 14:07:14.523660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 546.638 ms, result 0 00:46:17.325 [2024-11-20 14:07:14.524736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:17.325 [2024-11-20 14:07:14.545064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:18.258  [2024-11-20T14:07:16.955Z] Copying: 29/256 [MB] (29 MBps) [2024-11-20T14:07:17.564Z] Copying: 57/256 [MB] (27 MBps) [2024-11-20T14:07:18.939Z] Copying: 84/256 [MB] (27 MBps) [2024-11-20T14:07:19.874Z] Copying: 111/256 [MB] (26 MBps) [2024-11-20T14:07:20.809Z] Copying: 138/256 [MB] (27 MBps) [2024-11-20T14:07:21.745Z] Copying: 165/256 [MB] (26 MBps) [2024-11-20T14:07:22.694Z] Copying: 191/256 [MB] (26 MBps) [2024-11-20T14:07:23.637Z] Copying: 217/256 [MB] (26 MBps) [2024-11-20T14:07:24.205Z] Copying: 244/256 [MB] (26 MBps) [2024-11-20T14:07:24.205Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 14:07:23.982916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:26.882 [2024-11-20 14:07:24.000101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.000161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:26.882 [2024-11-20 14:07:24.000181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:26.882 [2024-11-20 14:07:24.000207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.000239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:26.882 [2024-11-20 14:07:24.005151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.005193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:26.882 [2024-11-20 14:07:24.005210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.889 ms 00:46:26.882 [2024-11-20 14:07:24.005222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.005522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.005539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:26.882 [2024-11-20 14:07:24.005554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:46:26.882 [2024-11-20 14:07:24.005566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.008908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.008951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:26.882 [2024-11-20 14:07:24.008964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:46:26.882 [2024-11-20 14:07:24.008976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
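The ~27 MBps copy traced above is trim.sh@85 reading the FTL device back into a file. A minimal sketch of that read-back step, assuming this run's workspace layout (paths and flags taken verbatim from the log): spdk_dd copies 65536 logical blocks from bdev ftl0 into a plain file, which at the 256 [MB] total reported above works out to 4 KiB per block.

SPDK=/home/vagrant/spdk_repo/spdk   # repo root used by this job
# Read 65536 blocks from the FTL bdev into test/ftl/data, using the
# bdev configuration JSON the harness generated for this run.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
    --count=65536 --json="$SPDK/test/ftl/config/ftl.json"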
00:46:26.882 [2024-11-20 14:07:24.015258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.015306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:26.882 [2024-11-20 14:07:24.015319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:46:26.882 [2024-11-20 14:07:24.015331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.057143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.057400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:26.882 [2024-11-20 14:07:24.057427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.721 ms 00:46:26.882 [2024-11-20 14:07:24.057439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.081864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.081938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:26.882 [2024-11-20 14:07:24.081960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.319 ms 00:46:26.882 [2024-11-20 14:07:24.081972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.082179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.082194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:26.882 [2024-11-20 14:07:24.082206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:46:26.882 [2024-11-20 14:07:24.082216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.123644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.123933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:26.882 [2024-11-20 14:07:24.123962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.391 ms 00:46:26.882 [2024-11-20 14:07:24.123974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:26.882 [2024-11-20 14:07:24.166353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:26.882 [2024-11-20 14:07:24.166430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:26.882 [2024-11-20 14:07:24.166449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.288 ms 00:46:26.882 [2024-11-20 14:07:24.166461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.142 [2024-11-20 14:07:24.211300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:27.142 [2024-11-20 14:07:24.211362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:27.142 [2024-11-20 14:07:24.211380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.721 ms 00:46:27.142 [2024-11-20 14:07:24.211392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.142 [2024-11-20 14:07:24.255671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:27.142 [2024-11-20 14:07:24.255971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:27.142 [2024-11-20 14:07:24.256000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.141 ms 00:46:27.142 [2024-11-20 
14:07:24.256012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.142 [2024-11-20 14:07:24.256102] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:27.142 [2024-11-20 14:07:24.256124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256414] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:27.142 [2024-11-20 14:07:24.256554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 
14:07:24.256752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.256999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:46:27.143 [2024-11-20 14:07:24.257061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:27.143 [2024-11-20 14:07:24.257423] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:27.143 [2024-11-20 14:07:24.257435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:27.143 [2024-11-20 14:07:24.257448] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:27.143 [2024-11-20 14:07:24.257460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:27.143 [2024-11-20 14:07:24.257471] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:27.143 [2024-11-20 14:07:24.257494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:27.143 [2024-11-20 14:07:24.257506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:27.143 [2024-11-20 14:07:24.257519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:27.143 [2024-11-20 14:07:24.257530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:27.143 [2024-11-20 14:07:24.257541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:27.143 [2024-11-20 14:07:24.257552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:27.143 [2024-11-20 14:07:24.257563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:27.143 [2024-11-20 14:07:24.257581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:27.143 [2024-11-20 14:07:24.257593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:46:27.143 [2024-11-20 14:07:24.257605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.143 [2024-11-20 14:07:24.280471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:27.143 [2024-11-20 14:07:24.280568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:27.143 [2024-11-20 14:07:24.280586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.830 ms 00:46:27.143 [2024-11-20 14:07:24.280598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.143 [2024-11-20 14:07:24.281321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:27.143 [2024-11-20 14:07:24.281343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:27.143 [2024-11-20 14:07:24.281356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:46:27.143 [2024-11-20 14:07:24.281368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.143 [2024-11-20 14:07:24.342882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.143 [2024-11-20 14:07:24.343156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:27.143 [2024-11-20 14:07:24.343184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.143 [2024-11-20 14:07:24.343196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.144 [2024-11-20 14:07:24.343380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.144 [2024-11-20 14:07:24.343395] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:27.144 [2024-11-20 14:07:24.343408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.144 [2024-11-20 14:07:24.343420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.144 [2024-11-20 14:07:24.343519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.144 [2024-11-20 14:07:24.343536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:27.144 [2024-11-20 14:07:24.343549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.144 [2024-11-20 14:07:24.343561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.144 [2024-11-20 14:07:24.343584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.144 [2024-11-20 14:07:24.343607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:27.144 [2024-11-20 14:07:24.343619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.144 [2024-11-20 14:07:24.343630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.485001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.485083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:27.402 [2024-11-20 14:07:24.485100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.402 [2024-11-20 14:07:24.485112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.602000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.602073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:27.402 [2024-11-20 14:07:24.602090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.402 [2024-11-20 14:07:24.602102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.602238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.602253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:27.402 [2024-11-20 14:07:24.602265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.402 [2024-11-20 14:07:24.602276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.602308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.602320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:27.402 [2024-11-20 14:07:24.602339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.402 [2024-11-20 14:07:24.602351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.602477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.602525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:27.402 [2024-11-20 14:07:24.602537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.402 [2024-11-20 14:07:24.602564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.402 [2024-11-20 14:07:24.602626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:46:27.402 [2024-11-20 14:07:24.602641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:27.402 [2024-11-20 14:07:24.602653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.403 [2024-11-20 14:07:24.602674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.403 [2024-11-20 14:07:24.602719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.403 [2024-11-20 14:07:24.602733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:27.403 [2024-11-20 14:07:24.602744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.403 [2024-11-20 14:07:24.602756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.403 [2024-11-20 14:07:24.602804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:27.403 [2024-11-20 14:07:24.602818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:27.403 [2024-11-20 14:07:24.602838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:27.403 [2024-11-20 14:07:24.602849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:27.403 [2024-11-20 14:07:24.603016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.920 ms, result 0 00:46:28.779 00:46:28.779 00:46:28.779 14:07:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:46:28.779 14:07:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:46:29.039 14:07:26 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:29.039 [2024-11-20 14:07:26.286184] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
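Between the shutdown above and the startup that follows, trim.sh@86 through @90 verify the dump and write a fresh pattern. A minimal sketch of that verify-and-rewrite step, again assuming this run's paths: the first 4194304 bytes (4 MiB, i.e. 1024 of the 4 KiB blocks), which the trim test expects to read back as zeros, are compared against /dev/zero, the dump is checksummed for a later end-to-end comparison, and a 1024-block random pattern is written back through ftl0.

set -e
SPDK=/home/vagrant/spdk_repo/spdk   # repo root used by this job
# The trimmed range must read back as zeros.
cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero
# Record a checksum of the dump for the later comparison.
md5sum "$SPDK/test/ftl/data"
# Write 1024 blocks of the prepared random pattern back through ftl0.
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
    --count=1024 --json="$SPDK/test/ftl/config/ftl.json"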
00:46:29.039 [2024-11-20 14:07:26.286321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79281 ] 00:46:29.298 [2024-11-20 14:07:26.462315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:29.298 [2024-11-20 14:07:26.595271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:29.866 [2024-11-20 14:07:27.001281] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:29.866 [2024-11-20 14:07:27.001369] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:29.866 [2024-11-20 14:07:27.167084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:29.866 [2024-11-20 14:07:27.167152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:29.866 [2024-11-20 14:07:27.167169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:29.866 [2024-11-20 14:07:27.167180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.866 [2024-11-20 14:07:27.170474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:29.866 [2024-11-20 14:07:27.170531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:29.866 [2024-11-20 14:07:27.170545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:46:29.866 [2024-11-20 14:07:27.170556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.866 [2024-11-20 14:07:27.170701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:29.866 [2024-11-20 14:07:27.171887] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:29.866 [2024-11-20 14:07:27.171918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:29.866 [2024-11-20 14:07:27.171929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:29.866 [2024-11-20 14:07:27.171942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:46:29.866 [2024-11-20 14:07:27.171954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:29.866 [2024-11-20 14:07:27.173556] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:30.127 [2024-11-20 14:07:27.195277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.195358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:30.127 [2024-11-20 14:07:27.195376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.719 ms 00:46:30.127 [2024-11-20 14:07:27.195388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.195595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.195614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:30.127 [2024-11-20 14:07:27.195626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:46:30.127 [2024-11-20 14:07:27.195638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.203402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:46:30.127 [2024-11-20 14:07:27.203448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:30.127 [2024-11-20 14:07:27.203461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.709 ms 00:46:30.127 [2024-11-20 14:07:27.203472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.203636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.203655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:30.127 [2024-11-20 14:07:27.203668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:46:30.127 [2024-11-20 14:07:27.203679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.203715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.203732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:30.127 [2024-11-20 14:07:27.203753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:30.127 [2024-11-20 14:07:27.203765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.203794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:46:30.127 [2024-11-20 14:07:27.209550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.209596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:30.127 [2024-11-20 14:07:27.209611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.764 ms 00:46:30.127 [2024-11-20 14:07:27.209623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.209723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.209737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:30.127 [2024-11-20 14:07:27.209750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:30.127 [2024-11-20 14:07:27.209761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.209787] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:30.127 [2024-11-20 14:07:27.209818] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:30.127 [2024-11-20 14:07:27.209858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:30.127 [2024-11-20 14:07:27.209879] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:30.127 [2024-11-20 14:07:27.209980] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:30.127 [2024-11-20 14:07:27.209995] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:30.127 [2024-11-20 14:07:27.210010] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:30.127 [2024-11-20 14:07:27.210024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210042] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:46:30.127 [2024-11-20 14:07:27.210065] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:30.127 [2024-11-20 14:07:27.210076] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:30.127 [2024-11-20 14:07:27.210088] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:30.127 [2024-11-20 14:07:27.210099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.210110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:30.127 [2024-11-20 14:07:27.210121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:46:30.127 [2024-11-20 14:07:27.210132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.210217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.127 [2024-11-20 14:07:27.210233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:30.127 [2024-11-20 14:07:27.210244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:46:30.127 [2024-11-20 14:07:27.210256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.127 [2024-11-20 14:07:27.210360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:30.127 [2024-11-20 14:07:27.210374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:30.127 [2024-11-20 14:07:27.210385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:30.127 [2024-11-20 14:07:27.210419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:30.127 [2024-11-20 14:07:27.210452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:30.127 [2024-11-20 14:07:27.210472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:30.127 [2024-11-20 14:07:27.210502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:46:30.127 [2024-11-20 14:07:27.210513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:30.127 [2024-11-20 14:07:27.210537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:30.127 [2024-11-20 14:07:27.210548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:46:30.127 [2024-11-20 14:07:27.210558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:30.127 [2024-11-20 14:07:27.210581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210591] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:30.127 [2024-11-20 14:07:27.210613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:30.127 [2024-11-20 14:07:27.210647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:30.127 [2024-11-20 14:07:27.210667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:30.127 [2024-11-20 14:07:27.210677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:46:30.127 [2024-11-20 14:07:27.210687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:30.128 [2024-11-20 14:07:27.210697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:30.128 [2024-11-20 14:07:27.210707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:46:30.128 [2024-11-20 14:07:27.210717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:30.128 [2024-11-20 14:07:27.210727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:30.128 [2024-11-20 14:07:27.210737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:46:30.128 [2024-11-20 14:07:27.210747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:30.128 [2024-11-20 14:07:27.210756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:30.128 [2024-11-20 14:07:27.210766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:46:30.128 [2024-11-20 14:07:27.210776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:30.128 [2024-11-20 14:07:27.210786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:30.128 [2024-11-20 14:07:27.210797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:46:30.128 [2024-11-20 14:07:27.210806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.128 [2024-11-20 14:07:27.210816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:30.128 [2024-11-20 14:07:27.210826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:46:30.128 [2024-11-20 14:07:27.210835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.128 [2024-11-20 14:07:27.210845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:30.128 [2024-11-20 14:07:27.210857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:30.128 [2024-11-20 14:07:27.210881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:30.128 [2024-11-20 14:07:27.210895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:30.128 [2024-11-20 14:07:27.210905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:30.128 [2024-11-20 14:07:27.210914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:30.128 [2024-11-20 14:07:27.210925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:30.128 
[2024-11-20 14:07:27.210934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:30.128 [2024-11-20 14:07:27.210943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:30.128 [2024-11-20 14:07:27.210953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:30.128 [2024-11-20 14:07:27.210964] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:30.128 [2024-11-20 14:07:27.210977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.210988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:46:30.128 [2024-11-20 14:07:27.210999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:46:30.128 [2024-11-20 14:07:27.211009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:46:30.128 [2024-11-20 14:07:27.211020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:46:30.128 [2024-11-20 14:07:27.211030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:46:30.128 [2024-11-20 14:07:27.211040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:46:30.128 [2024-11-20 14:07:27.211067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:46:30.128 [2024-11-20 14:07:27.211079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:46:30.128 [2024-11-20 14:07:27.211089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:46:30.128 [2024-11-20 14:07:27.211101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:46:30.128 [2024-11-20 14:07:27.211156] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:30.128 [2024-11-20 14:07:27.211169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:46:30.128 [2024-11-20 14:07:27.211192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:30.128 [2024-11-20 14:07:27.211203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:30.128 [2024-11-20 14:07:27.211214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:30.128 [2024-11-20 14:07:27.211226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.211237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:30.128 [2024-11-20 14:07:27.211253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:46:30.128 [2024-11-20 14:07:27.211264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.252390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.252454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:30.128 [2024-11-20 14:07:27.252471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.058 ms 00:46:30.128 [2024-11-20 14:07:27.252500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.252710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.252732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:30.128 [2024-11-20 14:07:27.252745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:46:30.128 [2024-11-20 14:07:27.252756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.310148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.310210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:30.128 [2024-11-20 14:07:27.310226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.361 ms 00:46:30.128 [2024-11-20 14:07:27.310241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.310388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.310401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:30.128 [2024-11-20 14:07:27.310412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:30.128 [2024-11-20 14:07:27.310422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.310932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.310948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:30.128 [2024-11-20 14:07:27.310961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:46:30.128 [2024-11-20 14:07:27.310979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.311111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.311126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:30.128 [2024-11-20 14:07:27.311138] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:46:30.128 [2024-11-20 14:07:27.311150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.332273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.332337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:30.128 [2024-11-20 14:07:27.332353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.095 ms 00:46:30.128 [2024-11-20 14:07:27.332364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.353385] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:46:30.128 [2024-11-20 14:07:27.353460] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:30.128 [2024-11-20 14:07:27.353494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.353507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:30.128 [2024-11-20 14:07:27.353521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.938 ms 00:46:30.128 [2024-11-20 14:07:27.353532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.386927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.387036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:30.128 [2024-11-20 14:07:27.387054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.232 ms 00:46:30.128 [2024-11-20 14:07:27.387065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.408293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.408599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:30.128 [2024-11-20 14:07:27.408626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.037 ms 00:46:30.128 [2024-11-20 14:07:27.408638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.429642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.429716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:30.128 [2024-11-20 14:07:27.429734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.862 ms 00:46:30.128 [2024-11-20 14:07:27.429745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.128 [2024-11-20 14:07:27.430650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.128 [2024-11-20 14:07:27.430675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:30.128 [2024-11-20 14:07:27.430687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:46:30.128 [2024-11-20 14:07:27.430698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.526839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.526938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:30.388 [2024-11-20 14:07:27.526960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.087 ms 00:46:30.388 [2024-11-20 14:07:27.526972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.541199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:46:30.388 [2024-11-20 14:07:27.559794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.559863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:30.388 [2024-11-20 14:07:27.559885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.607 ms 00:46:30.388 [2024-11-20 14:07:27.559904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.560043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.560062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:30.388 [2024-11-20 14:07:27.560075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:46:30.388 [2024-11-20 14:07:27.560092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.560156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.560169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:30.388 [2024-11-20 14:07:27.560184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:46:30.388 [2024-11-20 14:07:27.560195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.560242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.560258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:30.388 [2024-11-20 14:07:27.560270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:46:30.388 [2024-11-20 14:07:27.560281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.560318] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:30.388 [2024-11-20 14:07:27.560336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.560347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:30.388 [2024-11-20 14:07:27.560359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:46:30.388 [2024-11-20 14:07:27.560369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.603436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.603546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:30.388 [2024-11-20 14:07:27.603567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.034 ms 00:46:30.388 [2024-11-20 14:07:27.603579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.388 [2024-11-20 14:07:27.603853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.388 [2024-11-20 14:07:27.603883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:30.388 [2024-11-20 14:07:27.603903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:46:30.388 [2024-11-20 14:07:27.603922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
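The layout dump earlier in this startup is internally consistent and easy to sanity-check. Assuming the region sizes in the superblock dump are counted in 4 KiB FTL blocks (an inference from the numbers, not stated in the log), the hex blk_sz values reproduce the MiB figures exactly, and the l2p region also matches 23592960 L2P entries at the reported 4-byte address size:

  # l2p region: 0x5a00 blocks * 4 KiB = 90 MiB ("Region l2p ... 90.00 MiB")
  printf 'l2p region:  %d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))
  # l2p table: 23592960 entries * 4 B per address = 90 MiB as well
  printf 'l2p table:   %d MiB\n' $(( 23592960 * 4 / 1048576 ))
  # each p2l checkpoint: 0x800 blocks * 4 KiB = 8 MiB ("blocks: 8.00 MiB")
  printf 'p2l region:  %d MiB\n' $(( 0x800 * 4096 / 1048576 ))
  # base data region: 0x1900000 blocks * 4 KiB = 102400 MiB ("data_btm")
  printf 'data region: %d MiB\n' $(( 0x1900000 * 4096 / 1048576 ))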
00:46:30.388 [2024-11-20 14:07:27.605155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:30.388 [2024-11-20 14:07:27.611128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 437.683 ms, result 0 00:46:30.388 [2024-11-20 14:07:27.612073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:30.388 [2024-11-20 14:07:27.633484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:30.647  [2024-11-20T14:07:27.970Z] Copying: 4096/4096 [kB] (average 27 MBps)[2024-11-20 14:07:27.787034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:30.647 [2024-11-20 14:07:27.803406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.647 [2024-11-20 14:07:27.803685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:30.647 [2024-11-20 14:07:27.803716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:30.647 [2024-11-20 14:07:27.803738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.647 [2024-11-20 14:07:27.803796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:30.647 [2024-11-20 14:07:27.808490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.647 [2024-11-20 14:07:27.808525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:30.647 [2024-11-20 14:07:27.808540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:46:30.647 [2024-11-20 14:07:27.808564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.647 [2024-11-20 14:07:27.810664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.810818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:30.648 [2024-11-20 14:07:27.810842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:46:30.648 [2024-11-20 14:07:27.810855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.814416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.814585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:30.648 [2024-11-20 14:07:27.814607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:46:30.648 [2024-11-20 14:07:27.814619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.820918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.820949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:30.648 [2024-11-20 14:07:27.820962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:46:30.648 [2024-11-20 14:07:27.820972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.861888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.861956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:30.648 [2024-11-20 14:07:27.861973] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 40.855 ms 00:46:30.648 [2024-11-20 14:07:27.861983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.885780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.885880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:30.648 [2024-11-20 14:07:27.885903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.696 ms 00:46:30.648 [2024-11-20 14:07:27.885915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.886091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.886105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:30.648 [2024-11-20 14:07:27.886116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:46:30.648 [2024-11-20 14:07:27.886126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.925895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.925957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:30.648 [2024-11-20 14:07:27.925973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.734 ms 00:46:30.648 [2024-11-20 14:07:27.925983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.648 [2024-11-20 14:07:27.963541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.648 [2024-11-20 14:07:27.963592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:30.648 [2024-11-20 14:07:27.963608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.484 ms 00:46:30.648 [2024-11-20 14:07:27.963618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.908 [2024-11-20 14:07:27.999483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.908 [2024-11-20 14:07:27.999527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:30.908 [2024-11-20 14:07:27.999541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.795 ms 00:46:30.908 [2024-11-20 14:07:27.999567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.908 [2024-11-20 14:07:28.035107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.908 [2024-11-20 14:07:28.035288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:30.908 [2024-11-20 14:07:28.035309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.452 ms 00:46:30.908 [2024-11-20 14:07:28.035320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.908 [2024-11-20 14:07:28.035425] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:30.908 [2024-11-20 14:07:28.035462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:46:30.908 [2024-11-20 14:07:28.035536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:30.908 [2024-11-20 14:07:28.035958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.035969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.035979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.035990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036342] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:30.909 [2024-11-20 14:07:28.036606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:30.909 [2024-11-20 14:07:28.036616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:30.909 [2024-11-20 14:07:28.036627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:30.909 [2024-11-20 14:07:28.036637] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:46:30.909 [2024-11-20 14:07:28.036647] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:30.909 [2024-11-20 14:07:28.036657] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:30.909 [2024-11-20 14:07:28.036667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:30.909 [2024-11-20 14:07:28.036678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:30.909 [2024-11-20 14:07:28.036700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:30.909 [2024-11-20 14:07:28.036709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:30.909 [2024-11-20 14:07:28.036718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:30.909 [2024-11-20 14:07:28.036728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.909 [2024-11-20 14:07:28.036742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:30.910 [2024-11-20 14:07:28.036754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:46:30.910 [2024-11-20 14:07:28.036764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.057085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.910 [2024-11-20 14:07:28.057120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:30.910 [2024-11-20 14:07:28.057160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.297 ms 00:46:30.910 [2024-11-20 14:07:28.057170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.057746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:30.910 [2024-11-20 14:07:28.057763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:30.910 [2024-11-20 14:07:28.057774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:46:30.910 [2024-11-20 14:07:28.057785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.113532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:30.910 [2024-11-20 14:07:28.113576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:30.910 [2024-11-20 14:07:28.113591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:30.910 [2024-11-20 14:07:28.113602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.113697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:30.910 [2024-11-20 14:07:28.113709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:30.910 [2024-11-20 14:07:28.113720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:30.910 [2024-11-20 14:07:28.113730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.113780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:30.910 [2024-11-20 14:07:28.113793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:30.910 [2024-11-20 14:07:28.113804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:30.910 [2024-11-20 14:07:28.113814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:30.910 [2024-11-20 14:07:28.113833] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:30.910 [2024-11-20 14:07:28.113849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:30.910 [2024-11-20 14:07:28.113859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:30.910 [2024-11-20 14:07:28.113869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.238168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.238405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:31.169 [2024-11-20 14:07:28.238431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.238443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.339564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.339821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:31.169 [2024-11-20 14:07:28.339845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.339856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.339956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.339969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:31.169 [2024-11-20 14:07:28.339981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.340041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:31.169 [2024-11-20 14:07:28.340059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.340206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:31.169 [2024-11-20 14:07:28.340217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.340277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:31.169 [2024-11-20 14:07:28.340291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.340350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:31.169 [2024-11-20 14:07:28.340360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340370] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:31.169 [2024-11-20 14:07:28.340426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:31.169 [2024-11-20 14:07:28.340440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:31.169 [2024-11-20 14:07:28.340450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:31.169 [2024-11-20 14:07:28.340612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.213 ms, result 0 00:46:32.105 00:46:32.105 00:46:32.364 14:07:29 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79317 00:46:32.364 14:07:29 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:46:32.364 14:07:29 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79317 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79317 ']' 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:32.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:32.364 14:07:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:46:32.364 [2024-11-20 14:07:29.574981] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
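After this clean shutdown, trim.sh restarts the target and replays the saved configuration: spdk_tgt is launched with the ftl_init debug log flag, the script blocks until the RPC socket /var/tmp/spdk.sock is listening, and rpc.py load_config re-creates ftl0 from the JSON written earlier. A rough equivalent of that restart sequence — the real waitforlisten helper also retries and checks the pid, and load_config is assumed to read its JSON from stdin, since the bare invocation in the log would not show a redirect:

  # Start the target with FTL init tracing enabled and remember its pid.
  spdk_tgt -L ftl_init &
  svcpid=$!

  # Wait for the RPC UNIX-domain socket to appear before issuing RPCs.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Replay the previously saved configuration (hypothetical ./ftl.json).
  rpc.py load_config < ./ftl.json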
00:46:32.364 [2024-11-20 14:07:29.575149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79317 ] 00:46:32.623 [2024-11-20 14:07:29.762285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:32.623 [2024-11-20 14:07:29.865156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.560 14:07:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:33.560 14:07:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:46:33.560 14:07:30 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:46:33.819 [2024-11-20 14:07:31.023456] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:33.819 [2024-11-20 14:07:31.023522] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:34.080 [2024-11-20 14:07:31.209525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.209730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:34.080 [2024-11-20 14:07:31.209764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:34.080 [2024-11-20 14:07:31.209776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.213427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.213589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:34.080 [2024-11-20 14:07:31.213616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:46:34.080 [2024-11-20 14:07:31.213628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.213738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:34.080 [2024-11-20 14:07:31.214929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:34.080 [2024-11-20 14:07:31.214982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.214996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:34.080 [2024-11-20 14:07:31.215016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:46:34.080 [2024-11-20 14:07:31.215031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.216728] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:34.080 [2024-11-20 14:07:31.236845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.236887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:34.080 [2024-11-20 14:07:31.236917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.121 ms 00:46:34.080 [2024-11-20 14:07:31.236931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.237029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.237047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:34.080 [2024-11-20 14:07:31.237058] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:46:34.080 [2024-11-20 14:07:31.237071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.243956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.244142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:34.080 [2024-11-20 14:07:31.244164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:46:34.080 [2024-11-20 14:07:31.244180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.244328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.244348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:34.080 [2024-11-20 14:07:31.244360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:46:34.080 [2024-11-20 14:07:31.244376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.244412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.244429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:34.080 [2024-11-20 14:07:31.244440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:34.080 [2024-11-20 14:07:31.244455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.244501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:46:34.080 [2024-11-20 14:07:31.249354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.249387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:34.080 [2024-11-20 14:07:31.249404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:46:34.080 [2024-11-20 14:07:31.249415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.249579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.249598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:34.080 [2024-11-20 14:07:31.249614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:46:34.080 [2024-11-20 14:07:31.249631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.249662] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:34.080 [2024-11-20 14:07:31.249686] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:34.080 [2024-11-20 14:07:31.249732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:34.080 [2024-11-20 14:07:31.249752] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:34.080 [2024-11-20 14:07:31.249848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:34.080 [2024-11-20 14:07:31.249861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:34.080 [2024-11-20 14:07:31.249882] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:34.080 [2024-11-20 14:07:31.249896] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:34.080 [2024-11-20 14:07:31.249911] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:34.080 [2024-11-20 14:07:31.249923] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:46:34.080 [2024-11-20 14:07:31.249935] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:34.080 [2024-11-20 14:07:31.249945] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:34.080 [2024-11-20 14:07:31.249960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:34.080 [2024-11-20 14:07:31.249971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.249984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:34.080 [2024-11-20 14:07:31.249994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:46:34.080 [2024-11-20 14:07:31.250007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.250087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.080 [2024-11-20 14:07:31.250100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:34.080 [2024-11-20 14:07:31.250111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:46:34.080 [2024-11-20 14:07:31.250124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.080 [2024-11-20 14:07:31.250216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:34.080 [2024-11-20 14:07:31.250231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:34.080 [2024-11-20 14:07:31.250243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:34.080 [2024-11-20 14:07:31.250278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:34.080 [2024-11-20 14:07:31.250315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:34.080 [2024-11-20 14:07:31.250337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:34.080 [2024-11-20 14:07:31.250349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:46:34.080 [2024-11-20 14:07:31.250358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:34.080 [2024-11-20 14:07:31.250370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:34.080 [2024-11-20 14:07:31.250380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:46:34.080 [2024-11-20 14:07:31.250393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.080 
[2024-11-20 14:07:31.250402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:34.080 [2024-11-20 14:07:31.250414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:34.080 [2024-11-20 14:07:31.250454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:34.080 [2024-11-20 14:07:31.250502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:34.080 [2024-11-20 14:07:31.250533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:34.080 [2024-11-20 14:07:31.250555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:34.080 [2024-11-20 14:07:31.250570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:46:34.080 [2024-11-20 14:07:31.250580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:34.081 [2024-11-20 14:07:31.250595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:34.081 [2024-11-20 14:07:31.250604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:46:34.081 [2024-11-20 14:07:31.250620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:34.081 [2024-11-20 14:07:31.250631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:34.081 [2024-11-20 14:07:31.250660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:46:34.081 [2024-11-20 14:07:31.250670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:34.081 [2024-11-20 14:07:31.250689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:34.081 [2024-11-20 14:07:31.250699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:46:34.081 [2024-11-20 14:07:31.250717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.081 [2024-11-20 14:07:31.250727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:34.081 [2024-11-20 14:07:31.250742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:46:34.081 [2024-11-20 14:07:31.250752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.081 [2024-11-20 14:07:31.250766] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:34.081 [2024-11-20 14:07:31.250781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:34.081 [2024-11-20 14:07:31.250797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:34.081 [2024-11-20 14:07:31.250807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:34.081 [2024-11-20 14:07:31.250822] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:46:34.081 [2024-11-20 14:07:31.250832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:34.081 [2024-11-20 14:07:31.250849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:34.081 [2024-11-20 14:07:31.250859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:34.081 [2024-11-20 14:07:31.250871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:34.081 [2024-11-20 14:07:31.250881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:34.081 [2024-11-20 14:07:31.250894] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:34.081 [2024-11-20 14:07:31.250907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.250924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:46:34.081 [2024-11-20 14:07:31.250934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:46:34.081 [2024-11-20 14:07:31.250949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:46:34.081 [2024-11-20 14:07:31.250961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:46:34.081 [2024-11-20 14:07:31.250973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:46:34.081 [2024-11-20 14:07:31.250984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:46:34.081 [2024-11-20 14:07:31.250996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:46:34.081 [2024-11-20 14:07:31.251007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:46:34.081 [2024-11-20 14:07:31.251021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:46:34.081 [2024-11-20 14:07:31.251032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:46:34.081 [2024-11-20 14:07:31.251091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:34.081 [2024-11-20 
14:07:31.251103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:34.081 [2024-11-20 14:07:31.251131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:34.081 [2024-11-20 14:07:31.251143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:34.081 [2024-11-20 14:07:31.251154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:34.081 [2024-11-20 14:07:31.251167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.251178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:34.081 [2024-11-20 14:07:31.251191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:46:34.081 [2024-11-20 14:07:31.251202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.290266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.290307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:34.081 [2024-11-20 14:07:31.290324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.996 ms 00:46:34.081 [2024-11-20 14:07:31.290338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.290494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.290508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:34.081 [2024-11-20 14:07:31.290522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:46:34.081 [2024-11-20 14:07:31.290533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.338055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.338105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:34.081 [2024-11-20 14:07:31.338125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.490 ms 00:46:34.081 [2024-11-20 14:07:31.338136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.338254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.338267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:34.081 [2024-11-20 14:07:31.338284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:34.081 [2024-11-20 14:07:31.338294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.338757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.338772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:34.081 [2024-11-20 14:07:31.338794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:46:34.081 [2024-11-20 14:07:31.338804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.338932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.338945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:34.081 [2024-11-20 14:07:31.338961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:46:34.081 [2024-11-20 14:07:31.338971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.361218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.361262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:34.081 [2024-11-20 14:07:31.361282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.218 ms 00:46:34.081 [2024-11-20 14:07:31.361294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.081 [2024-11-20 14:07:31.395272] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:34.081 [2024-11-20 14:07:31.395317] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:34.081 [2024-11-20 14:07:31.395340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.081 [2024-11-20 14:07:31.395352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:34.081 [2024-11-20 14:07:31.395369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.900 ms 00:46:34.081 [2024-11-20 14:07:31.395380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.426219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.426282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:34.341 [2024-11-20 14:07:31.426306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.733 ms 00:46:34.341 [2024-11-20 14:07:31.426317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.445079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.445247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:34.341 [2024-11-20 14:07:31.445284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.657 ms 00:46:34.341 [2024-11-20 14:07:31.445296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.463504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.463541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:34.341 [2024-11-20 14:07:31.463561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms 00:46:34.341 [2024-11-20 14:07:31.463571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.464392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.464427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:34.341 [2024-11-20 14:07:31.464445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:46:34.341 [2024-11-20 14:07:31.464455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 
14:07:31.552072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.552339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:34.341 [2024-11-20 14:07:31.552370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.561 ms 00:46:34.341 [2024-11-20 14:07:31.552382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.563473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:46:34.341 [2024-11-20 14:07:31.580169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.580241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:34.341 [2024-11-20 14:07:31.580261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.636 ms 00:46:34.341 [2024-11-20 14:07:31.580285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.580405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.580425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:34.341 [2024-11-20 14:07:31.580437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:46:34.341 [2024-11-20 14:07:31.580452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.580550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.580568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:34.341 [2024-11-20 14:07:31.580580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:46:34.341 [2024-11-20 14:07:31.580601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.580628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.580645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:34.341 [2024-11-20 14:07:31.580655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:34.341 [2024-11-20 14:07:31.580670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.580713] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:34.341 [2024-11-20 14:07:31.580736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.580746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:34.341 [2024-11-20 14:07:31.580767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:46:34.341 [2024-11-20 14:07:31.580777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.618083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.618142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:34.341 [2024-11-20 14:07:31.618163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.265 ms 00:46:34.341 [2024-11-20 14:07:31.618174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.618297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.341 [2024-11-20 14:07:31.618312] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:34.341 [2024-11-20 14:07:31.618329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:46:34.341 [2024-11-20 14:07:31.618345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.341 [2024-11-20 14:07:31.619336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:34.341 [2024-11-20 14:07:31.623716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.468 ms, result 0 00:46:34.341 [2024-11-20 14:07:31.624916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:34.341 Some configs were skipped because the RPC state that can call them passed over. 00:46:34.600 14:07:31 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:46:34.600 [2024-11-20 14:07:31.921332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.600 [2024-11-20 14:07:31.921620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:46:34.600 [2024-11-20 14:07:31.921731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:46:34.600 [2024-11-20 14:07:31.921784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.867 [2024-11-20 14:07:31.922035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.355 ms, result 0 00:46:34.867 true 00:46:34.867 14:07:31 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:46:34.867 [2024-11-20 14:07:32.112998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.867 [2024-11-20 14:07:32.113054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:46:34.867 [2024-11-20 14:07:32.113078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:46:34.867 [2024-11-20 14:07:32.113090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.867 [2024-11-20 14:07:32.113143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.274 ms, result 0 00:46:34.867 true 00:46:34.867 14:07:32 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79317 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79317 ']' 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79317 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79317 00:46:34.867 killing process with pid 79317 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79317' 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79317 00:46:34.867 14:07:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79317 00:46:36.253 [2024-11-20 14:07:33.315959] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.316026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:36.253 [2024-11-20 14:07:33.316043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:36.253 [2024-11-20 14:07:33.316056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.316082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:36.253 [2024-11-20 14:07:33.320611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.320644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:36.253 [2024-11-20 14:07:33.320663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:46:36.253 [2024-11-20 14:07:33.320673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.320923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.320940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:36.253 [2024-11-20 14:07:33.320954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:46:36.253 [2024-11-20 14:07:33.320964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.324386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.324427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:36.253 [2024-11-20 14:07:33.324447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.376 ms 00:46:36.253 [2024-11-20 14:07:33.324458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.330329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.330366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:36.253 [2024-11-20 14:07:33.330381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.793 ms 00:46:36.253 [2024-11-20 14:07:33.330392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.345846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.346023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:36.253 [2024-11-20 14:07:33.346057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.392 ms 00:46:36.253 [2024-11-20 14:07:33.346080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.357305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.357461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:36.253 [2024-11-20 14:07:33.357507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.138 ms 00:46:36.253 [2024-11-20 14:07:33.357519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.357695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.357717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:36.253 [2024-11-20 14:07:33.357731] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:46:36.253 [2024-11-20 14:07:33.357742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.373684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.373718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:36.253 [2024-11-20 14:07:33.373733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:46:36.253 [2024-11-20 14:07:33.373743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.388689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.388722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:36.253 [2024-11-20 14:07:33.388741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.892 ms 00:46:36.253 [2024-11-20 14:07:33.388751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.403399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.403581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:36.253 [2024-11-20 14:07:33.403611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.577 ms 00:46:36.253 [2024-11-20 14:07:33.403621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.418199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.253 [2024-11-20 14:07:33.418234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:36.253 [2024-11-20 14:07:33.418251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.493 ms 00:46:36.253 [2024-11-20 14:07:33.418261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.253 [2024-11-20 14:07:33.418321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:36.253 [2024-11-20 14:07:33.418340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 
14:07:33.418499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:36.253 [2024-11-20 14:07:33.418844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:46:36.254 [2024-11-20 14:07:33.418871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.418991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:36.254 [2024-11-20 14:07:33.419791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:36.254 [2024-11-20 14:07:33.419816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:36.254 [2024-11-20 14:07:33.419841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:36.254 [2024-11-20 14:07:33.419863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:36.254 [2024-11-20 14:07:33.419874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:36.254 [2024-11-20 14:07:33.419890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:36.255 [2024-11-20 14:07:33.419899] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:36.255 [2024-11-20 14:07:33.419912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:36.255 [2024-11-20 14:07:33.419922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:36.255 [2024-11-20 14:07:33.419934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:36.255 [2024-11-20 14:07:33.419944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:36.255 [2024-11-20 14:07:33.419957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:46:36.255 [2024-11-20 14:07:33.419967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:36.255 [2024-11-20 14:07:33.419981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:46:36.255 [2024-11-20 14:07:33.419992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.441895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.255 [2024-11-20 14:07:33.441930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:36.255 [2024-11-20 14:07:33.441949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.872 ms 00:46:36.255 [2024-11-20 14:07:33.441960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.442570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.255 [2024-11-20 14:07:33.442594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:36.255 [2024-11-20 14:07:33.442610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:46:36.255 [2024-11-20 14:07:33.442624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.514990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.255 [2024-11-20 14:07:33.515043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:36.255 [2024-11-20 14:07:33.515064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.255 [2024-11-20 14:07:33.515074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.515221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.255 [2024-11-20 14:07:33.515236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:36.255 [2024-11-20 14:07:33.515252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.255 [2024-11-20 14:07:33.515268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.515329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.255 [2024-11-20 14:07:33.515342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:36.255 [2024-11-20 14:07:33.515362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.255 [2024-11-20 14:07:33.515373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.255 [2024-11-20 14:07:33.515397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.255 [2024-11-20 14:07:33.515408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:36.255 [2024-11-20 14:07:33.515423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.255 [2024-11-20 14:07:33.515434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.644077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.644136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:36.514 [2024-11-20 14:07:33.644155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.644165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 
14:07:33.749384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.749442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:36.514 [2024-11-20 14:07:33.749460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.749475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.749609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.749622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:36.514 [2024-11-20 14:07:33.749639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.749650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.749682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.749693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:36.514 [2024-11-20 14:07:33.749706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.749716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.749852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.749866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:36.514 [2024-11-20 14:07:33.749879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.749890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.749936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.749950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:36.514 [2024-11-20 14:07:33.749964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.749975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.750024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.750036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:36.514 [2024-11-20 14:07:33.750053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.750065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.750116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:36.514 [2024-11-20 14:07:33.750129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:36.514 [2024-11-20 14:07:33.750142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:36.514 [2024-11-20 14:07:33.750154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.514 [2024-11-20 14:07:33.750303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 434.314 ms, result 0 00:46:37.892 14:07:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:37.893 [2024-11-20 14:07:34.938806] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:46:37.893 [2024-11-20 14:07:34.938934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79381 ] 00:46:37.893 [2024-11-20 14:07:35.110748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:38.151 [2024-11-20 14:07:35.230565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:38.410 [2024-11-20 14:07:35.610290] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:38.410 [2024-11-20 14:07:35.610573] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:38.669 [2024-11-20 14:07:35.773182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.669 [2024-11-20 14:07:35.773433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:38.669 [2024-11-20 14:07:35.773459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:38.669 [2024-11-20 14:07:35.773470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.669 [2024-11-20 14:07:35.776921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.669 [2024-11-20 14:07:35.776963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:38.670 [2024-11-20 14:07:35.776976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.404 ms 00:46:38.670 [2024-11-20 14:07:35.776986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.777097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:38.670 [2024-11-20 14:07:35.778035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:38.670 [2024-11-20 14:07:35.778070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.778082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:38.670 [2024-11-20 14:07:35.778093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:46:38.670 [2024-11-20 14:07:35.778104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.779820] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:38.670 [2024-11-20 14:07:35.799772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.799819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:38.670 [2024-11-20 14:07:35.799834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.953 ms 00:46:38.670 [2024-11-20 14:07:35.799845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.799952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.799967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:38.670 [2024-11-20 14:07:35.799978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:46:38.670 [2024-11-20 
14:07:35.799988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.807085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.807117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:38.670 [2024-11-20 14:07:35.807130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.052 ms 00:46:38.670 [2024-11-20 14:07:35.807142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.807253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.807269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:38.670 [2024-11-20 14:07:35.807282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:46:38.670 [2024-11-20 14:07:35.807293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.807326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.807342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:38.670 [2024-11-20 14:07:35.807354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:38.670 [2024-11-20 14:07:35.807366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.807394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:46:38.670 [2024-11-20 14:07:35.812518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.812553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:38.670 [2024-11-20 14:07:35.812566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:46:38.670 [2024-11-20 14:07:35.812577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.812653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.812667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:38.670 [2024-11-20 14:07:35.812679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:46:38.670 [2024-11-20 14:07:35.812691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.812716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:38.670 [2024-11-20 14:07:35.812744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:38.670 [2024-11-20 14:07:35.812785] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:38.670 [2024-11-20 14:07:35.812805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:38.670 [2024-11-20 14:07:35.812908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:38.670 [2024-11-20 14:07:35.812922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:38.670 [2024-11-20 14:07:35.812936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:46:38.670 [2024-11-20 14:07:35.812949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:38.670 [2024-11-20 14:07:35.812966] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:38.670 [2024-11-20 14:07:35.812977] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:46:38.670 [2024-11-20 14:07:35.812987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:38.670 [2024-11-20 14:07:35.812997] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:38.670 [2024-11-20 14:07:35.813007] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:38.670 [2024-11-20 14:07:35.813018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.813028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:38.670 [2024-11-20 14:07:35.813039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:46:38.670 [2024-11-20 14:07:35.813050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.813127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.670 [2024-11-20 14:07:35.813142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:38.670 [2024-11-20 14:07:35.813153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:46:38.670 [2024-11-20 14:07:35.813163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.670 [2024-11-20 14:07:35.813257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:38.670 [2024-11-20 14:07:35.813270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:38.670 [2024-11-20 14:07:35.813281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:38.670 [2024-11-20 14:07:35.813312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:38.670 [2024-11-20 14:07:35.813341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:38.670 [2024-11-20 14:07:35.813360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:38.670 [2024-11-20 14:07:35.813369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:46:38.670 [2024-11-20 14:07:35.813380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:38.670 [2024-11-20 14:07:35.813400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:38.670 [2024-11-20 14:07:35.813411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:46:38.670 [2024-11-20 14:07:35.813420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:46:38.670 [2024-11-20 14:07:35.813438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:38.670 [2024-11-20 14:07:35.813466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:38.670 [2024-11-20 14:07:35.813514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:38.670 [2024-11-20 14:07:35.813542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:38.670 [2024-11-20 14:07:35.813571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:38.670 [2024-11-20 14:07:35.813589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:38.670 [2024-11-20 14:07:35.813616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:38.670 [2024-11-20 14:07:35.813635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:38.670 [2024-11-20 14:07:35.813645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:46:38.670 [2024-11-20 14:07:35.813654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:38.670 [2024-11-20 14:07:35.813663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:38.670 [2024-11-20 14:07:35.813673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:46:38.670 [2024-11-20 14:07:35.813682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.670 [2024-11-20 14:07:35.813691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:38.670 [2024-11-20 14:07:35.813700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:46:38.670 [2024-11-20 14:07:35.813710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.671 [2024-11-20 14:07:35.813720] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:38.671 [2024-11-20 14:07:35.813731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:38.671 [2024-11-20 14:07:35.813741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:38.671 [2024-11-20 14:07:35.813754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:38.671 [2024-11-20 14:07:35.813766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:38.671 [2024-11-20 14:07:35.813776] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:38.671 [2024-11-20 14:07:35.813785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:38.671 [2024-11-20 14:07:35.813795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:38.671 [2024-11-20 14:07:35.813804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:38.671 [2024-11-20 14:07:35.813813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:38.671 [2024-11-20 14:07:35.813824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:38.671 [2024-11-20 14:07:35.813836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.813848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:46:38.671 [2024-11-20 14:07:35.813858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:46:38.671 [2024-11-20 14:07:35.813869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:46:38.671 [2024-11-20 14:07:35.813879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:46:38.671 [2024-11-20 14:07:35.813889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:46:38.671 [2024-11-20 14:07:35.813900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:46:38.671 [2024-11-20 14:07:35.813911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:46:38.671 [2024-11-20 14:07:35.813921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:46:38.671 [2024-11-20 14:07:35.813931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:46:38.671 [2024-11-20 14:07:35.813941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.813952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.813962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.813972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.813982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:46:38.671 [2024-11-20 14:07:35.813993] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:38.671 [2024-11-20 14:07:35.814004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.814015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:38.671 [2024-11-20 14:07:35.814025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:38.671 [2024-11-20 14:07:35.814035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:38.671 [2024-11-20 14:07:35.814046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:38.671 [2024-11-20 14:07:35.814057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.814067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:38.671 [2024-11-20 14:07:35.814082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:46:38.671 [2024-11-20 14:07:35.814092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.854918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.854969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:38.671 [2024-11-20 14:07:35.854985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.746 ms 00:46:38.671 [2024-11-20 14:07:35.854996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.855154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.855172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:38.671 [2024-11-20 14:07:35.855183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:46:38.671 [2024-11-20 14:07:35.855194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.908865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.908921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:38.671 [2024-11-20 14:07:35.908937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.645 ms 00:46:38.671 [2024-11-20 14:07:35.908952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.909085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.909098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:38.671 [2024-11-20 14:07:35.909110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:38.671 [2024-11-20 14:07:35.909121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.909592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.909620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:38.671 [2024-11-20 14:07:35.909632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:46:38.671 [2024-11-20 14:07:35.909653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.909776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.909791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:38.671 [2024-11-20 14:07:35.909801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:46:38.671 [2024-11-20 14:07:35.909811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.931429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.931490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:38.671 [2024-11-20 14:07:35.931506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.594 ms 00:46:38.671 [2024-11-20 14:07:35.931517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.952391] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:38.671 [2024-11-20 14:07:35.952449] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:38.671 [2024-11-20 14:07:35.952467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.952508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:38.671 [2024-11-20 14:07:35.952524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.807 ms 00:46:38.671 [2024-11-20 14:07:35.952535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.671 [2024-11-20 14:07:35.984154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.671 [2024-11-20 14:07:35.984223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:38.671 [2024-11-20 14:07:35.984240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.502 ms 00:46:38.671 [2024-11-20 14:07:35.984251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.003141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.003185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:38.930 [2024-11-20 14:07:36.003200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.783 ms 00:46:38.930 [2024-11-20 14:07:36.003210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.022341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.022564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:38.930 [2024-11-20 14:07:36.022588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.042 ms 00:46:38.930 [2024-11-20 14:07:36.022599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.023399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.023435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:38.930 [2024-11-20 14:07:36.023448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:46:38.930 [2024-11-20 14:07:36.023459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.116803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 
14:07:36.116863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:38.930 [2024-11-20 14:07:36.116881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.297 ms 00:46:38.930 [2024-11-20 14:07:36.116892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.129077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:46:38.930 [2024-11-20 14:07:36.146019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.146274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:38.930 [2024-11-20 14:07:36.146301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.991 ms 00:46:38.930 [2024-11-20 14:07:36.146322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.146473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.146509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:38.930 [2024-11-20 14:07:36.146522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:38.930 [2024-11-20 14:07:36.146532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.146591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.146602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:38.930 [2024-11-20 14:07:36.146614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:46:38.930 [2024-11-20 14:07:36.146624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.146662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.146676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:38.930 [2024-11-20 14:07:36.146687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:46:38.930 [2024-11-20 14:07:36.146697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.930 [2024-11-20 14:07:36.146734] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:38.930 [2024-11-20 14:07:36.146746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.930 [2024-11-20 14:07:36.146756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:38.931 [2024-11-20 14:07:36.146766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:46:38.931 [2024-11-20 14:07:36.146777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.931 [2024-11-20 14:07:36.184888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.931 [2024-11-20 14:07:36.185058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:38.931 [2024-11-20 14:07:36.185081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.088 ms 00:46:38.931 [2024-11-20 14:07:36.185092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.931 [2024-11-20 14:07:36.185277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:38.931 [2024-11-20 14:07:36.185292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:38.931 [2024-11-20 
14:07:36.185304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:46:38.931 [2024-11-20 14:07:36.185315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:38.931 [2024-11-20 14:07:36.186310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:38.931 [2024-11-20 14:07:36.190735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.802 ms, result 0 00:46:38.931 [2024-11-20 14:07:36.191535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:38.931 [2024-11-20 14:07:36.210496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:40.308  [2024-11-20T14:07:38.566Z] Copying: 31/256 [MB] (31 MBps) [2024-11-20T14:07:39.504Z] Copying: 59/256 [MB] (27 MBps) [2024-11-20T14:07:40.439Z] Copying: 86/256 [MB] (27 MBps) [2024-11-20T14:07:41.376Z] Copying: 113/256 [MB] (27 MBps) [2024-11-20T14:07:42.311Z] Copying: 140/256 [MB] (27 MBps) [2024-11-20T14:07:43.687Z] Copying: 167/256 [MB] (26 MBps) [2024-11-20T14:07:44.621Z] Copying: 195/256 [MB] (27 MBps) [2024-11-20T14:07:45.558Z] Copying: 222/256 [MB] (27 MBps) [2024-11-20T14:07:45.558Z] Copying: 249/256 [MB] (27 MBps) [2024-11-20T14:07:45.862Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 14:07:45.799961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:48.539 [2024-11-20 14:07:45.816561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.539 [2024-11-20 14:07:45.816615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:48.539 [2024-11-20 14:07:45.816631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:48.539 [2024-11-20 14:07:45.816664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.539 [2024-11-20 14:07:45.816690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:46:48.539 [2024-11-20 14:07:45.820912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.539 [2024-11-20 14:07:45.820948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:48.539 [2024-11-20 14:07:45.820961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.204 ms 00:46:48.539 [2024-11-20 14:07:45.820987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.539 [2024-11-20 14:07:45.821235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.539 [2024-11-20 14:07:45.821248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:48.539 [2024-11-20 14:07:45.821259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:46:48.539 [2024-11-20 14:07:45.821270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.539 [2024-11-20 14:07:45.824477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.539 [2024-11-20 14:07:45.824516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:48.539 [2024-11-20 14:07:45.824528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:46:48.539 [2024-11-20 14:07:45.824538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.539 [2024-11-20 
14:07:45.830447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.539 [2024-11-20 14:07:45.830487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:48.539 [2024-11-20 14:07:45.830499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:46:48.539 [2024-11-20 14:07:45.830509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.799 [2024-11-20 14:07:45.867550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.799 [2024-11-20 14:07:45.867591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:48.799 [2024-11-20 14:07:45.867607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.955 ms 00:46:48.799 [2024-11-20 14:07:45.867617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.799 [2024-11-20 14:07:45.889237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.799 [2024-11-20 14:07:45.889287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:48.799 [2024-11-20 14:07:45.889306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.520 ms 00:46:48.799 [2024-11-20 14:07:45.889318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.799 [2024-11-20 14:07:45.889517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.799 [2024-11-20 14:07:45.889534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:48.799 [2024-11-20 14:07:45.889553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:46:48.800 [2024-11-20 14:07:45.889563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.800 [2024-11-20 14:07:45.927454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.800 [2024-11-20 14:07:45.927519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:48.800 [2024-11-20 14:07:45.927533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.842 ms 00:46:48.800 [2024-11-20 14:07:45.927543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.800 [2024-11-20 14:07:45.964522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.800 [2024-11-20 14:07:45.964565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:48.800 [2024-11-20 14:07:45.964578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.917 ms 00:46:48.800 [2024-11-20 14:07:45.964605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.800 [2024-11-20 14:07:46.001058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.800 [2024-11-20 14:07:46.001099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:48.800 [2024-11-20 14:07:46.001114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.392 ms 00:46:48.800 [2024-11-20 14:07:46.001140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.800 [2024-11-20 14:07:46.037397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.800 [2024-11-20 14:07:46.037438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:48.800 [2024-11-20 14:07:46.037451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.131 ms 00:46:48.800 [2024-11-20 14:07:46.037462] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.800 [2024-11-20 14:07:46.037531] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:48.800 [2024-11-20 14:07:46.037550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.037997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:48.800 [2024-11-20 14:07:46.038395] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 
14:07:46.038705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:48.801 [2024-11-20 14:07:46.038736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:48.801 [2024-11-20 14:07:46.038746] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1f0a796-101e-4f30-a82e-71441876703e 00:46:48.801 [2024-11-20 14:07:46.038758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:48.801 [2024-11-20 14:07:46.038770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:48.801 [2024-11-20 14:07:46.038780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:48.801 [2024-11-20 14:07:46.038791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:48.801 [2024-11-20 14:07:46.038801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:48.801 [2024-11-20 14:07:46.038812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:48.801 [2024-11-20 14:07:46.038823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:48.801 [2024-11-20 14:07:46.038833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:48.801 [2024-11-20 14:07:46.038843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:48.801 [2024-11-20 14:07:46.038854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.801 [2024-11-20 14:07:46.038870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:48.801 [2024-11-20 14:07:46.038881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:46:48.801 [2024-11-20 14:07:46.038892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.058916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.801 [2024-11-20 14:07:46.058952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:48.801 [2024-11-20 14:07:46.058965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.001 ms 00:46:48.801 [2024-11-20 14:07:46.058991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.059535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:48.801 [2024-11-20 14:07:46.059553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:48.801 [2024-11-20 14:07:46.059565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:46:48.801 [2024-11-20 14:07:46.059575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.115970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:48.801 [2024-11-20 14:07:46.116014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:48.801 [2024-11-20 14:07:46.116029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:48.801 [2024-11-20 14:07:46.116040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.116136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:48.801 [2024-11-20 14:07:46.116148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:46:48.801 [2024-11-20 14:07:46.116158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:48.801 [2024-11-20 14:07:46.116168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.116223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:48.801 [2024-11-20 14:07:46.116236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:48.801 [2024-11-20 14:07:46.116247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:48.801 [2024-11-20 14:07:46.116257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:48.801 [2024-11-20 14:07:46.116276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:48.801 [2024-11-20 14:07:46.116291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:48.801 [2024-11-20 14:07:46.116302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:48.801 [2024-11-20 14:07:46.116312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.238723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.238786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:49.061 [2024-11-20 14:07:46.238801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.238812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.336760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.336821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:49.061 [2024-11-20 14:07:46.336836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.336848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.336930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.336942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:49.061 [2024-11-20 14:07:46.336953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.336963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.337022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:49.061 [2024-11-20 14:07:46.337039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.337050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.337186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:49.061 [2024-11-20 14:07:46.337197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.337208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.337261] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:49.061 [2024-11-20 14:07:46.337273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.337288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.337343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:49.061 [2024-11-20 14:07:46.337354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.337365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:49.061 [2024-11-20 14:07:46.337432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:49.061 [2024-11-20 14:07:46.337446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:49.061 [2024-11-20 14:07:46.337457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:49.061 [2024-11-20 14:07:46.337622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.062 ms, result 0 00:46:50.437 00:46:50.437 00:46:50.437 14:07:47 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:50.695 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:46:50.695 14:07:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:46:50.953 14:07:48 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79317 00:46:50.953 14:07:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79317 ']' 00:46:50.953 14:07:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79317 00:46:50.953 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79317) - No such process 00:46:50.953 Process with pid 79317 is not found 00:46:50.953 14:07:48 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79317 is not found' 00:46:50.953 00:46:50.953 real 1m10.815s 00:46:50.953 user 1m39.560s 00:46:50.953 sys 0m7.444s 00:46:50.953 14:07:48 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:50.953 14:07:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:46:50.953 ************************************ 00:46:50.953 END TEST ftl_trim 00:46:50.953 ************************************ 00:46:50.953 14:07:48 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:46:50.953 14:07:48 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:46:50.953 14:07:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:50.953 14:07:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:46:50.953 ************************************ 
00:46:50.953 START TEST ftl_restore 00:46:50.953 ************************************ 00:46:50.953 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:46:50.953 * Looking for test storage... 00:46:50.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:46:50.953 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:50.953 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:46:50.953 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:50.953 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:50.953 14:07:48 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:50.953 14:07:48 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:50.953 14:07:48 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:50.953 14:07:48 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:46:50.953 14:07:48 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:50.954 14:07:48 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:51.213 14:07:48 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.213 --rc genhtml_branch_coverage=1 00:46:51.213 --rc genhtml_function_coverage=1 00:46:51.213 --rc genhtml_legend=1 00:46:51.213 --rc geninfo_all_blocks=1 00:46:51.213 --rc geninfo_unexecuted_blocks=1 00:46:51.213 00:46:51.213 ' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.213 --rc genhtml_branch_coverage=1 00:46:51.213 --rc genhtml_function_coverage=1 00:46:51.213 --rc genhtml_legend=1 00:46:51.213 --rc geninfo_all_blocks=1 00:46:51.213 --rc geninfo_unexecuted_blocks=1 00:46:51.213 00:46:51.213 ' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.213 --rc genhtml_branch_coverage=1 00:46:51.213 --rc genhtml_function_coverage=1 00:46:51.213 --rc genhtml_legend=1 00:46:51.213 --rc geninfo_all_blocks=1 00:46:51.213 --rc geninfo_unexecuted_blocks=1 00:46:51.213 00:46:51.213 ' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.213 --rc genhtml_branch_coverage=1 00:46:51.213 --rc genhtml_function_coverage=1 00:46:51.213 --rc genhtml_legend=1 00:46:51.213 --rc geninfo_all_blocks=1 00:46:51.213 --rc geninfo_unexecuted_blocks=1 00:46:51.213 00:46:51.213 ' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
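Before ftl/common.sh finishes resolving its paths below, note what the coverage check traced above actually did: lt 1.15 2 asked whether the installed lcov (1.15) predates 2.x by splitting both version strings on '.', '-' and ':' and comparing them component-wise, which is why LCOV_OPTS ends up with the older lcov_branch_coverage flag spelling. A condensed re-creation of that comparison, keeping the helper name from the trace (the real scripts/common.sh also validates digits via decimal(); that step is elided here):

#!/usr/bin/env bash
# Component-wise version compare, mirroring the cmp_versions trace above.
cmp_versions() {
    local -a ver1 ver2
    local v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing components compare as 0
        (( a > b )) && { [[ $2 == '>' ]]; return; }
        (( a < b )) && { [[ $2 == '<' ]]; return; }
    done
    return 1                                        # equal: neither strictly < nor >
}

cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2.x'   # true, as in the trace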
00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:46:51.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8feM6JKzfa 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79587 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79587 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79587 ']' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:51.213 14:07:48 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:51.213 14:07:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:46:51.213 [2024-11-20 14:07:48.460724] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
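The spdk_tgt launch just above is the last step of restore.sh's prologue; by this point getopts has already mapped the invocation restore.sh -c 0000:00:10.0 0000:00:11.0 onto its variables, as the ftl/restore.sh@15-36 trace shows. A condensed sketch of that argument handling (variable names come from the trace; the -u and -f branches are inferred from the optstring, so their purposes are assumptions):

#!/usr/bin/env bash
# Option parsing as traced: ':u:c:f' means -u and -c take arguments, -f is a flag.
while getopts ':u:c:f' opt; do
    case $opt in
        u) uuid=$OPTARG ;;        # assumed: a pre-existing FTL UUID to restore
        c) nv_cache=$OPTARG ;;    # PCI address of the NV-cache controller (0000:00:10.0 here)
        f) fast_shutdown=1 ;;     # assumed meaning; the log only shows the flag letter
        *) echo "usage: $0 [-u uuid] [-c bdf] [-f] base_bdf" >&2; exit 1 ;;
    esac
done
shift $(( OPTIND - 1 ))           # the trace does the equivalent with an explicit 'shift 2'
device=$1                         # 0000:00:11.0, the base-device controller
timeout=240                       # matches restore.sh@25 in the trace

trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT   # cleanup handler, as at restore.sh@36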
00:46:51.213 [2024-11-20 14:07:48.460900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79587 ] 00:46:51.472 [2024-11-20 14:07:48.664677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.731 [2024-11-20 14:07:48.839997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:52.711 14:07:49 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:52.711 14:07:49 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:46:52.712 14:07:49 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:46:52.712 14:07:50 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:46:52.712 14:07:50 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:46:52.712 14:07:50 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:46:52.712 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:46:52.712 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:52.712 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:46:52.712 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:46:52.969 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:46:52.969 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:52.969 { 00:46:52.969 "name": "nvme0n1", 00:46:52.969 "aliases": [ 00:46:52.969 "dcc0f02d-1d9d-4887-a337-282c8193470b" 00:46:52.969 ], 00:46:52.969 "product_name": "NVMe disk", 00:46:52.969 "block_size": 4096, 00:46:52.969 "num_blocks": 1310720, 00:46:52.969 "uuid": "dcc0f02d-1d9d-4887-a337-282c8193470b", 00:46:52.969 "numa_id": -1, 00:46:52.969 "assigned_rate_limits": { 00:46:52.969 "rw_ios_per_sec": 0, 00:46:52.969 "rw_mbytes_per_sec": 0, 00:46:52.969 "r_mbytes_per_sec": 0, 00:46:52.970 "w_mbytes_per_sec": 0 00:46:52.970 }, 00:46:52.970 "claimed": true, 00:46:52.970 "claim_type": "read_many_write_one", 00:46:52.970 "zoned": false, 00:46:52.970 "supported_io_types": { 00:46:52.970 "read": true, 00:46:52.970 "write": true, 00:46:52.970 "unmap": true, 00:46:52.970 "flush": true, 00:46:52.970 "reset": true, 00:46:52.970 "nvme_admin": true, 00:46:52.970 "nvme_io": true, 00:46:52.970 "nvme_io_md": false, 00:46:52.970 "write_zeroes": true, 00:46:52.970 "zcopy": false, 00:46:52.970 "get_zone_info": false, 00:46:52.970 "zone_management": false, 00:46:52.970 "zone_append": false, 00:46:52.970 "compare": true, 00:46:52.970 "compare_and_write": false, 00:46:52.970 "abort": true, 00:46:52.970 "seek_hole": false, 00:46:52.970 "seek_data": false, 00:46:52.970 "copy": true, 00:46:52.970 "nvme_iov_md": false 00:46:52.970 }, 00:46:52.970 "driver_specific": { 00:46:52.970 "nvme": [ 
00:46:52.970 { 00:46:52.970 "pci_address": "0000:00:11.0", 00:46:52.970 "trid": { 00:46:52.970 "trtype": "PCIe", 00:46:52.970 "traddr": "0000:00:11.0" 00:46:52.970 }, 00:46:52.970 "ctrlr_data": { 00:46:52.970 "cntlid": 0, 00:46:52.970 "vendor_id": "0x1b36", 00:46:52.970 "model_number": "QEMU NVMe Ctrl", 00:46:52.970 "serial_number": "12341", 00:46:52.970 "firmware_revision": "8.0.0", 00:46:52.970 "subnqn": "nqn.2019-08.org.qemu:12341", 00:46:52.970 "oacs": { 00:46:52.970 "security": 0, 00:46:52.970 "format": 1, 00:46:52.970 "firmware": 0, 00:46:52.970 "ns_manage": 1 00:46:52.970 }, 00:46:52.970 "multi_ctrlr": false, 00:46:52.970 "ana_reporting": false 00:46:52.970 }, 00:46:52.970 "vs": { 00:46:52.970 "nvme_version": "1.4" 00:46:52.970 }, 00:46:52.970 "ns_data": { 00:46:52.970 "id": 1, 00:46:52.970 "can_share": false 00:46:52.970 } 00:46:52.970 } 00:46:52.970 ], 00:46:52.970 "mp_policy": "active_passive" 00:46:52.970 } 00:46:52.970 } 00:46:52.970 ]' 00:46:52.970 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:53.227 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:46:53.227 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:53.227 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:46:53.227 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:46:53.227 14:07:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:46:53.227 14:07:50 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:46:53.227 14:07:50 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:46:53.227 14:07:50 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:46:53.227 14:07:50 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:53.227 14:07:50 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:46:53.485 14:07:50 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=92f2641f-279d-4a37-8ee1-a05f39c3e4c9 00:46:53.485 14:07:50 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:46:53.485 14:07:50 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92f2641f-279d-4a37-8ee1-a05f39c3e4c9 00:46:53.743 14:07:50 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:46:54.001 14:07:51 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=eda08c77-76d3-4558-a235-34a0ac2b9bac 00:46:54.001 14:07:51 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eda08c77-76d3-4558-a235-34a0ac2b9bac 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:46:54.261 14:07:51 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.261 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.261 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:54.261 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:46:54.261 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:46:54.261 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:54.520 { 00:46:54.520 "name": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:54.520 "aliases": [ 00:46:54.520 "lvs/nvme0n1p0" 00:46:54.520 ], 00:46:54.520 "product_name": "Logical Volume", 00:46:54.520 "block_size": 4096, 00:46:54.520 "num_blocks": 26476544, 00:46:54.520 "uuid": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:54.520 "assigned_rate_limits": { 00:46:54.520 "rw_ios_per_sec": 0, 00:46:54.520 "rw_mbytes_per_sec": 0, 00:46:54.520 "r_mbytes_per_sec": 0, 00:46:54.520 "w_mbytes_per_sec": 0 00:46:54.520 }, 00:46:54.520 "claimed": false, 00:46:54.520 "zoned": false, 00:46:54.520 "supported_io_types": { 00:46:54.520 "read": true, 00:46:54.520 "write": true, 00:46:54.520 "unmap": true, 00:46:54.520 "flush": false, 00:46:54.520 "reset": true, 00:46:54.520 "nvme_admin": false, 00:46:54.520 "nvme_io": false, 00:46:54.520 "nvme_io_md": false, 00:46:54.520 "write_zeroes": true, 00:46:54.520 "zcopy": false, 00:46:54.520 "get_zone_info": false, 00:46:54.520 "zone_management": false, 00:46:54.520 "zone_append": false, 00:46:54.520 "compare": false, 00:46:54.520 "compare_and_write": false, 00:46:54.520 "abort": false, 00:46:54.520 "seek_hole": true, 00:46:54.520 "seek_data": true, 00:46:54.520 "copy": false, 00:46:54.520 "nvme_iov_md": false 00:46:54.520 }, 00:46:54.520 "driver_specific": { 00:46:54.520 "lvol": { 00:46:54.520 "lvol_store_uuid": "eda08c77-76d3-4558-a235-34a0ac2b9bac", 00:46:54.520 "base_bdev": "nvme0n1", 00:46:54.520 "thin_provision": true, 00:46:54.520 "num_allocated_clusters": 0, 00:46:54.520 "snapshot": false, 00:46:54.520 "clone": false, 00:46:54.520 "esnap_clone": false 00:46:54.520 } 00:46:54.520 } 00:46:54.520 } 00:46:54.520 ]' 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:46:54.520 14:07:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:46:54.520 14:07:51 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:46:54.520 14:07:51 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:46:54.520 14:07:51 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:46:54.779 14:07:52 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:46:54.779 14:07:52 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:46:54.779 14:07:52 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.779 14:07:52 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e6e9c566-b902-4b18-892c-bff355429a19 00:46:54.779 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:54.779 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:46:54.779 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:46:54.779 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e9c566-b902-4b18-892c-bff355429a19 00:46:55.038 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:55.038 { 00:46:55.038 "name": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:55.038 "aliases": [ 00:46:55.038 "lvs/nvme0n1p0" 00:46:55.038 ], 00:46:55.038 "product_name": "Logical Volume", 00:46:55.038 "block_size": 4096, 00:46:55.038 "num_blocks": 26476544, 00:46:55.038 "uuid": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:55.038 "assigned_rate_limits": { 00:46:55.038 "rw_ios_per_sec": 0, 00:46:55.038 "rw_mbytes_per_sec": 0, 00:46:55.038 "r_mbytes_per_sec": 0, 00:46:55.038 "w_mbytes_per_sec": 0 00:46:55.038 }, 00:46:55.038 "claimed": false, 00:46:55.038 "zoned": false, 00:46:55.038 "supported_io_types": { 00:46:55.038 "read": true, 00:46:55.038 "write": true, 00:46:55.038 "unmap": true, 00:46:55.038 "flush": false, 00:46:55.038 "reset": true, 00:46:55.038 "nvme_admin": false, 00:46:55.038 "nvme_io": false, 00:46:55.038 "nvme_io_md": false, 00:46:55.038 "write_zeroes": true, 00:46:55.038 "zcopy": false, 00:46:55.038 "get_zone_info": false, 00:46:55.038 "zone_management": false, 00:46:55.038 "zone_append": false, 00:46:55.038 "compare": false, 00:46:55.038 "compare_and_write": false, 00:46:55.038 "abort": false, 00:46:55.038 "seek_hole": true, 00:46:55.038 "seek_data": true, 00:46:55.038 "copy": false, 00:46:55.038 "nvme_iov_md": false 00:46:55.038 }, 00:46:55.038 "driver_specific": { 00:46:55.038 "lvol": { 00:46:55.038 "lvol_store_uuid": "eda08c77-76d3-4558-a235-34a0ac2b9bac", 00:46:55.038 "base_bdev": "nvme0n1", 00:46:55.038 "thin_provision": true, 00:46:55.038 "num_allocated_clusters": 0, 00:46:55.038 "snapshot": false, 00:46:55.038 "clone": false, 00:46:55.038 "esnap_clone": false 00:46:55.038 } 00:46:55.038 } 00:46:55.038 } 00:46:55.038 ]' 00:46:55.038 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:55.038 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:46:55.297 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:55.297 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:46:55.297 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:46:55.297 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:46:55.297 14:07:52 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:46:55.297 14:07:52 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:46:55.556 14:07:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:46:55.556 14:07:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e6e9c566-b902-4b18-892c-bff355429a19 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e6e9c566-b902-4b18-892c-bff355429a19 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:55.556 14:07:52 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e9c566-b902-4b18-892c-bff355429a19 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:55.556 { 00:46:55.556 "name": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:55.556 "aliases": [ 00:46:55.556 "lvs/nvme0n1p0" 00:46:55.556 ], 00:46:55.556 "product_name": "Logical Volume", 00:46:55.556 "block_size": 4096, 00:46:55.556 "num_blocks": 26476544, 00:46:55.556 "uuid": "e6e9c566-b902-4b18-892c-bff355429a19", 00:46:55.556 "assigned_rate_limits": { 00:46:55.556 "rw_ios_per_sec": 0, 00:46:55.556 "rw_mbytes_per_sec": 0, 00:46:55.556 "r_mbytes_per_sec": 0, 00:46:55.556 "w_mbytes_per_sec": 0 00:46:55.556 }, 00:46:55.556 "claimed": false, 00:46:55.556 "zoned": false, 00:46:55.556 "supported_io_types": { 00:46:55.556 "read": true, 00:46:55.556 "write": true, 00:46:55.556 "unmap": true, 00:46:55.556 "flush": false, 00:46:55.556 "reset": true, 00:46:55.556 "nvme_admin": false, 00:46:55.556 "nvme_io": false, 00:46:55.556 "nvme_io_md": false, 00:46:55.556 "write_zeroes": true, 00:46:55.556 "zcopy": false, 00:46:55.556 "get_zone_info": false, 00:46:55.556 "zone_management": false, 00:46:55.556 "zone_append": false, 00:46:55.556 "compare": false, 00:46:55.556 "compare_and_write": false, 00:46:55.556 "abort": false, 00:46:55.556 "seek_hole": true, 00:46:55.556 "seek_data": true, 00:46:55.556 "copy": false, 00:46:55.556 "nvme_iov_md": false 00:46:55.556 }, 00:46:55.556 "driver_specific": { 00:46:55.556 "lvol": { 00:46:55.556 "lvol_store_uuid": "eda08c77-76d3-4558-a235-34a0ac2b9bac", 00:46:55.556 "base_bdev": "nvme0n1", 00:46:55.556 "thin_provision": true, 00:46:55.556 "num_allocated_clusters": 0, 00:46:55.556 "snapshot": false, 00:46:55.556 "clone": false, 00:46:55.556 "esnap_clone": false 00:46:55.556 } 00:46:55.556 } 00:46:55.556 } 00:46:55.556 ]' 00:46:55.556 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:55.817 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:46:55.817 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:55.817 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:46:55.817 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:46:55.817 14:07:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e6e9c566-b902-4b18-892c-bff355429a19 --l2p_dram_limit 10' 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:46:55.817 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:46:55.817 14:07:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e6e9c566-b902-4b18-892c-bff355429a19 --l2p_dram_limit 10 -c nvc0n1p0 00:46:55.817 
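The FTL startup notices that follow are the result of the whole RPC sequence traced since the target came up. Collected in one place for readability, using the exact commands and values from this run ($rpc stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: attach the QEMU NVMe controller behind 0000:00:11.0 and clear
# any lvstore left over from the previous test (92f2641f-... in this run).
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
done

# Fresh lvstore plus a thin-provisioned 103424 MiB lvol as the FTL base bdev.
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u eda08c77-76d3-4558-a235-34a0ac2b9bac

# Cache device: attach 0000:00:10.0 and split off one 5171 MiB slice
# (the cache_size ftl/common.sh computed in the trace above).
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# Finally the FTL bdev itself: 10 MiB resident L2P, nvc0n1p0 as write-buffer cache.
$rpc -t 240 bdev_ftl_create -b ftl0 -d e6e9c566-b902-4b18-892c-bff355429a19 \
    --l2p_dram_limit 10 -c nvc0n1p0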
[2024-11-20 14:07:53.102007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.102064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:55.817 [2024-11-20 14:07:53.102084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:55.817 [2024-11-20 14:07:53.102096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.102189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.102204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:55.817 [2024-11-20 14:07:53.102219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:46:55.817 [2024-11-20 14:07:53.102230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.102256] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:55.817 [2024-11-20 14:07:53.103385] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:55.817 [2024-11-20 14:07:53.103422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.103433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:55.817 [2024-11-20 14:07:53.103447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:46:55.817 [2024-11-20 14:07:53.103458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.103580] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca 00:46:55.817 [2024-11-20 14:07:53.105021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.105199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:46:55.817 [2024-11-20 14:07:53.105221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:46:55.817 [2024-11-20 14:07:53.105239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.112605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.112644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:55.817 [2024-11-20 14:07:53.112657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.313 ms 00:46:55.817 [2024-11-20 14:07:53.112669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.112773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.112790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:55.817 [2024-11-20 14:07:53.112802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:46:55.817 [2024-11-20 14:07:53.112819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.112892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.112907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:55.817 [2024-11-20 14:07:53.112917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:46:55.817 [2024-11-20 14:07:53.112933] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.112960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:55.817 [2024-11-20 14:07:53.118453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.118499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:55.817 [2024-11-20 14:07:53.118517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.497 ms 00:46:55.817 [2024-11-20 14:07:53.118527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.118564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.118575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:55.817 [2024-11-20 14:07:53.118588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:55.817 [2024-11-20 14:07:53.118599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.118636] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:46:55.817 [2024-11-20 14:07:53.118767] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:55.817 [2024-11-20 14:07:53.118787] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:55.817 [2024-11-20 14:07:53.118801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:55.817 [2024-11-20 14:07:53.118817] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:55.817 [2024-11-20 14:07:53.118829] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:55.817 [2024-11-20 14:07:53.118843] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:55.817 [2024-11-20 14:07:53.118853] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:55.817 [2024-11-20 14:07:53.118869] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:55.817 [2024-11-20 14:07:53.118879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:55.817 [2024-11-20 14:07:53.118892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.118903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:55.817 [2024-11-20 14:07:53.118916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:46:55.817 [2024-11-20 14:07:53.118937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.119015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.817 [2024-11-20 14:07:53.119026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:55.817 [2024-11-20 14:07:53.119040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:46:55.817 [2024-11-20 14:07:53.119050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.817 [2024-11-20 14:07:53.119149] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:55.817 [2024-11-20 14:07:53.119163] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:46:55.817 [2024-11-20 14:07:53.119176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:55.817 [2024-11-20 14:07:53.119187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.817 [2024-11-20 14:07:53.119200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:55.817 [2024-11-20 14:07:53.119209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:55.817 [2024-11-20 14:07:53.119221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:55.817 [2024-11-20 14:07:53.119231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:55.817 [2024-11-20 14:07:53.119243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:55.817 [2024-11-20 14:07:53.119253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:55.817 [2024-11-20 14:07:53.119265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:55.817 [2024-11-20 14:07:53.119275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:55.817 [2024-11-20 14:07:53.119287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:55.817 [2024-11-20 14:07:53.119297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:55.817 [2024-11-20 14:07:53.119309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:55.817 [2024-11-20 14:07:53.119320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.817 [2024-11-20 14:07:53.119334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:55.818 [2024-11-20 14:07:53.119344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:55.818 [2024-11-20 14:07:53.119379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:55.818 [2024-11-20 14:07:53.119409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:55.818 [2024-11-20 14:07:53.119442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:55.818 [2024-11-20 14:07:53.119474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:55.818 [2024-11-20 14:07:53.119529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119539] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:55.818 [2024-11-20 14:07:53.119551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:55.818 [2024-11-20 14:07:53.119561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:55.818 [2024-11-20 14:07:53.119573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:55.818 [2024-11-20 14:07:53.119582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:55.818 [2024-11-20 14:07:53.119611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:55.818 [2024-11-20 14:07:53.119621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:55.818 [2024-11-20 14:07:53.119642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:55.818 [2024-11-20 14:07:53.119654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:55.818 [2024-11-20 14:07:53.119676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:55.818 [2024-11-20 14:07:53.119687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:55.818 [2024-11-20 14:07:53.119713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:55.818 [2024-11-20 14:07:53.119738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:55.818 [2024-11-20 14:07:53.119764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:55.818 [2024-11-20 14:07:53.119778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:55.818 [2024-11-20 14:07:53.119788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:55.818 [2024-11-20 14:07:53.119803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:55.818 [2024-11-20 14:07:53.119819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:55.818 [2024-11-20 14:07:53.119836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.119851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:55.818 [2024-11-20 14:07:53.119866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:55.818 [2024-11-20 14:07:53.119878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:55.818 [2024-11-20 14:07:53.119892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:55.818 [2024-11-20 14:07:53.119903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:55.818 [2024-11-20 14:07:53.119917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:46:55.818 [2024-11-20 14:07:53.119929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:55.818 [2024-11-20 14:07:53.119943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:55.818 [2024-11-20 14:07:53.119954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:55.818 [2024-11-20 14:07:53.119971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.119982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.119996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.120008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.120023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:46:55.818 [2024-11-20 14:07:53.120035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:55.818 [2024-11-20 14:07:53.120049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.120062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:55.818 [2024-11-20 14:07:53.120076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:55.818 [2024-11-20 14:07:53.120087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:55.818 [2024-11-20 14:07:53.120101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:55.818 [2024-11-20 14:07:53.120113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:55.818 [2024-11-20 14:07:53.120127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:55.818 [2024-11-20 14:07:53.120139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:46:55.818 [2024-11-20 14:07:53.120152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:55.818 [2024-11-20 14:07:53.120200] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
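One detail in the layout dump above is worth the arithmetic: the dump reports 20971520 L2P entries at an address size of 4 bytes, i.e. an 80 MiB logical-to-physical table (exactly the "Region l2p ... blocks: 80.00 MiB" line), while the device was created with --l2p_dram_limit 10. The limit therefore keeps only a small window of the table resident, which the ftl_l2p_cache notice further down confirms ("l2p maximum resident size is: 9 (of 10) MiB"):

# L2P table size from the dump's own numbers:
echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB), capped to 10 MiB resident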
00:46:55.818 [2024-11-20 14:07:53.120219] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:46:59.109 [2024-11-20 14:07:56.323487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.109 [2024-11-20 14:07:56.323589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:46:59.109 [2024-11-20 14:07:56.323609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3203.265 ms 00:46:59.109 [2024-11-20 14:07:56.323625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.109 [2024-11-20 14:07:56.373611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.109 [2024-11-20 14:07:56.373676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:59.109 [2024-11-20 14:07:56.373694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.602 ms 00:46:59.109 [2024-11-20 14:07:56.373709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.109 [2024-11-20 14:07:56.373889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.109 [2024-11-20 14:07:56.373923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:59.109 [2024-11-20 14:07:56.373935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:46:59.109 [2024-11-20 14:07:56.373958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.431221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.431282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:59.368 [2024-11-20 14:07:56.431298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.190 ms 00:46:59.368 [2024-11-20 14:07:56.431313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.431369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.431392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:59.368 [2024-11-20 14:07:56.431404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:59.368 [2024-11-20 14:07:56.431419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.432306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.432342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:59.368 [2024-11-20 14:07:56.432354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:46:59.368 [2024-11-20 14:07:56.432369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.432504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.432521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:59.368 [2024-11-20 14:07:56.432537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:46:59.368 [2024-11-20 14:07:56.432556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.459782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.460053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:59.368 [2024-11-20 
14:07:56.460079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.201 ms 00:46:59.368 [2024-11-20 14:07:56.460094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.482893] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:59.368 [2024-11-20 14:07:56.488317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.488351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:59.368 [2024-11-20 14:07:56.488371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.090 ms 00:46:59.368 [2024-11-20 14:07:56.488384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.573534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.573592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:46:59.368 [2024-11-20 14:07:56.573614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.097 ms 00:46:59.368 [2024-11-20 14:07:56.573626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.573842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.573861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:59.368 [2024-11-20 14:07:56.573882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:46:59.368 [2024-11-20 14:07:56.573893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.368 [2024-11-20 14:07:56.612097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.368 [2024-11-20 14:07:56.612142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:46:59.368 [2024-11-20 14:07:56.612161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.145 ms 00:46:59.369 [2024-11-20 14:07:56.612173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.369 [2024-11-20 14:07:56.648748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.369 [2024-11-20 14:07:56.648955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:46:59.369 [2024-11-20 14:07:56.648985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.521 ms 00:46:59.369 [2024-11-20 14:07:56.648996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.369 [2024-11-20 14:07:56.649751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.369 [2024-11-20 14:07:56.649773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:59.369 [2024-11-20 14:07:56.649789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:46:59.369 [2024-11-20 14:07:56.649804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.754544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.754593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:46:59.628 [2024-11-20 14:07:56.754618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.675 ms 00:46:59.628 [2024-11-20 14:07:56.754630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 
14:07:56.794314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.794375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:46:59.628 [2024-11-20 14:07:56.794397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.589 ms 00:46:59.628 [2024-11-20 14:07:56.794409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.832928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.832969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:46:59.628 [2024-11-20 14:07:56.832988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.468 ms 00:46:59.628 [2024-11-20 14:07:56.832999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.870667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.870707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:59.628 [2024-11-20 14:07:56.870726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.617 ms 00:46:59.628 [2024-11-20 14:07:56.870738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.870791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.870804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:59.628 [2024-11-20 14:07:56.870824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:46:59.628 [2024-11-20 14:07:56.870835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.870957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:59.628 [2024-11-20 14:07:56.870972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:59.628 [2024-11-20 14:07:56.870991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:46:59.628 [2024-11-20 14:07:56.871002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:59.628 [2024-11-20 14:07:56.872609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3769.982 ms, result 0 00:46:59.628 { 00:46:59.628 "name": "ftl0", 00:46:59.628 "uuid": "018e6bcd-70cc-43bb-a7ea-245d85b0b2ca" 00:46:59.628 } 00:46:59.628 14:07:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:46:59.628 14:07:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:46:59.887 14:07:57 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:46:59.887 14:07:57 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:47:00.146 [2024-11-20 14:07:57.443629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.146 [2024-11-20 14:07:57.443688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:00.146 [2024-11-20 14:07:57.443706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:00.147 [2024-11-20 14:07:57.443741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.147 [2024-11-20 14:07:57.443787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
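The restore.sh@61-65 trace just above is the heart of the test: the live bdev configuration, ftl0 included, is wrapped in a subsystems envelope, and the device is then unloaded so a later phase can bring it back from that JSON. A sketch of the save step (the redirect target is an assumption: the trace shows the three commands but not where their output is collected, though test/ftl/config/ftl.json is the file the earlier cleanup removed):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed destination

{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev    # current bdev subsystem config, ftl0 included
    echo ']}'
} > "$cfg"

$rpc bdev_ftl_unload -b ftl0              # clean shutdown before the restore phase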
00:47:00.147 [2024-11-20 14:07:57.448756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.147 [2024-11-20 14:07:57.448965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:00.147 [2024-11-20 14:07:57.448995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:47:00.147 [2024-11-20 14:07:57.449008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.147 [2024-11-20 14:07:57.449283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.147 [2024-11-20 14:07:57.449307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:00.147 [2024-11-20 14:07:57.449323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:47:00.147 [2024-11-20 14:07:57.449334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.147 [2024-11-20 14:07:57.452190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.147 [2024-11-20 14:07:57.452322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:00.147 [2024-11-20 14:07:57.452348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:47:00.147 [2024-11-20 14:07:57.452360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.147 [2024-11-20 14:07:57.457585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.147 [2024-11-20 14:07:57.457615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:00.147 [2024-11-20 14:07:57.457635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.194 ms 00:47:00.147 [2024-11-20 14:07:57.457646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.496846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.497028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:00.407 [2024-11-20 14:07:57.497056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.124 ms 00:47:00.407 [2024-11-20 14:07:57.497067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.521158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.521214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:00.407 [2024-11-20 14:07:57.521234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.002 ms 00:47:00.407 [2024-11-20 14:07:57.521245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.521422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.521436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:00.407 [2024-11-20 14:07:57.521452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:47:00.407 [2024-11-20 14:07:57.521463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.559190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.559239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:00.407 [2024-11-20 14:07:57.559256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.676 ms 00:47:00.407 [2024-11-20 14:07:57.559276] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.596639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.596798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:00.407 [2024-11-20 14:07:57.596825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.318 ms 00:47:00.407 [2024-11-20 14:07:57.596836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.634256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.634293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:00.407 [2024-11-20 14:07:57.634310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.370 ms 00:47:00.407 [2024-11-20 14:07:57.634320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.671703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.407 [2024-11-20 14:07:57.671762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:00.407 [2024-11-20 14:07:57.671781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.258 ms 00:47:00.407 [2024-11-20 14:07:57.671791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.407 [2024-11-20 14:07:57.671837] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:00.407 [2024-11-20 14:07:57.671856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.671997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.672013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.672024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 14:07:57.672039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:00.407 [2024-11-20 
14:07:57.672051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:47:00.408 [2024-11-20 14:07:57.672399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.672988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:00.408 [2024-11-20 14:07:57.673284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:00.408 [2024-11-20 14:07:57.673303] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca 00:47:00.408 [2024-11-20 14:07:57.673314] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:47:00.408 [2024-11-20 14:07:57.673331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:00.408 [2024-11-20 14:07:57.673342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:00.409 [2024-11-20 14:07:57.673362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:00.409 [2024-11-20 14:07:57.673373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:00.409 [2024-11-20 14:07:57.673388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:00.409 [2024-11-20 14:07:57.673399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:00.409 [2024-11-20 14:07:57.673412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:00.409 [2024-11-20 14:07:57.673422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:00.409 [2024-11-20 14:07:57.673435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.409 [2024-11-20 14:07:57.673446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:00.409 [2024-11-20 14:07:57.673462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:47:00.409 [2024-11-20 14:07:57.673473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.409 [2024-11-20 14:07:57.695150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.409 [2024-11-20 14:07:57.695316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:47:00.409 [2024-11-20 14:07:57.695343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.597 ms 00:47:00.409 [2024-11-20 14:07:57.695355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.409 [2024-11-20 14:07:57.696044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:00.409 [2024-11-20 14:07:57.696061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:00.409 [2024-11-20 14:07:57.696081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:47:00.409 [2024-11-20 14:07:57.696092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.668 [2024-11-20 14:07:57.769383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.668 [2024-11-20 14:07:57.769426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:00.668 [2024-11-20 14:07:57.769444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.668 [2024-11-20 14:07:57.769454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.668 [2024-11-20 14:07:57.769551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.668 [2024-11-20 14:07:57.769564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:00.668 [2024-11-20 14:07:57.769583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.668 [2024-11-20 14:07:57.769614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.668 [2024-11-20 14:07:57.769747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.668 [2024-11-20 14:07:57.769762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:00.668 [2024-11-20 14:07:57.769776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.668 [2024-11-20 14:07:57.769786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.668 [2024-11-20 14:07:57.769814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.668 [2024-11-20 14:07:57.769825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:00.668 [2024-11-20 14:07:57.769839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.668 [2024-11-20 14:07:57.769849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.668 [2024-11-20 14:07:57.911446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.668 [2024-11-20 14:07:57.911524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:00.668 [2024-11-20 14:07:57.911547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.668 [2024-11-20 14:07:57.911559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.026269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:00.927 [2024-11-20 14:07:58.026291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.026307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.026506] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:00.927 [2024-11-20 14:07:58.026522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.026533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.026635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:00.927 [2024-11-20 14:07:58.026650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.026661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.026816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:00.927 [2024-11-20 14:07:58.026831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.026842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.026907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:00.927 [2024-11-20 14:07:58.026921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.026933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.026991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.027004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:00.927 [2024-11-20 14:07:58.027019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.027030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.027094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:00.927 [2024-11-20 14:07:58.027107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:00.927 [2024-11-20 14:07:58.027121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:00.927 [2024-11-20 14:07:58.027133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:00.927 [2024-11-20 14:07:58.027334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.619 ms, result 0 00:47:00.927 true 00:47:00.927 14:07:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79587 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79587 ']' 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79587 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79587 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:00.927 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:00.928 14:07:58 ftl.ftl_restore -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 79587' 00:47:00.928 killing process with pid 79587 00:47:00.928 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79587 00:47:00.928 14:07:58 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79587 00:47:06.199 14:08:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:47:11.472 262144+0 records in 00:47:11.472 262144+0 records out 00:47:11.472 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.45404 s, 241 MB/s 00:47:11.472 14:08:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:47:12.418 14:08:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:12.418 [2024-11-20 14:08:09.711017] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:47:12.418 [2024-11-20 14:08:09.711216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79841 ] 00:47:12.676 [2024-11-20 14:08:09.926042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:12.935 [2024-11-20 14:08:10.087912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:13.193 [2024-11-20 14:08:10.470851] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:13.193 [2024-11-20 14:08:10.470920] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:13.452 [2024-11-20 14:08:10.636724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.636781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:13.452 [2024-11-20 14:08:10.636804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:13.452 [2024-11-20 14:08:10.636815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.636872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.636885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:13.452 [2024-11-20 14:08:10.636902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:47:13.452 [2024-11-20 14:08:10.636913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.636935] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:13.452 [2024-11-20 14:08:10.637904] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:13.452 [2024-11-20 14:08:10.637927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.637938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:13.452 [2024-11-20 14:08:10.637950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:47:13.452 [2024-11-20 14:08:10.637960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.639393] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:13.452 [2024-11-20 14:08:10.658726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.658917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:13.452 [2024-11-20 14:08:10.658941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.333 ms 00:47:13.452 [2024-11-20 14:08:10.658952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.659048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.659062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:13.452 [2024-11-20 14:08:10.659073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:47:13.452 [2024-11-20 14:08:10.659084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.665987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.666177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:13.452 [2024-11-20 14:08:10.666199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.824 ms 00:47:13.452 [2024-11-20 14:08:10.666217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.666304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.666317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:13.452 [2024-11-20 14:08:10.666329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:47:13.452 [2024-11-20 14:08:10.666339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.666388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.666400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:13.452 [2024-11-20 14:08:10.666411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:47:13.452 [2024-11-20 14:08:10.666421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.666451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:13.452 [2024-11-20 14:08:10.671324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.671355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:13.452 [2024-11-20 14:08:10.671368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms 00:47:13.452 [2024-11-20 14:08:10.671381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.452 [2024-11-20 14:08:10.671411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.452 [2024-11-20 14:08:10.671422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:13.452 [2024-11-20 14:08:10.671432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:13.453 [2024-11-20 14:08:10.671442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.453 [2024-11-20 14:08:10.671525] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:13.453 [2024-11-20 14:08:10.671550] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:13.453 [2024-11-20 14:08:10.671586] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:13.453 [2024-11-20 14:08:10.671607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:13.453 [2024-11-20 14:08:10.671698] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:13.453 [2024-11-20 14:08:10.671722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:13.453 [2024-11-20 14:08:10.671736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:13.453 [2024-11-20 14:08:10.671749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:13.453 [2024-11-20 14:08:10.671762] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:13.453 [2024-11-20 14:08:10.671773] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:13.453 [2024-11-20 14:08:10.671783] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:13.453 [2024-11-20 14:08:10.671793] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:13.453 [2024-11-20 14:08:10.671807] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:13.453 [2024-11-20 14:08:10.671819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.453 [2024-11-20 14:08:10.671829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:13.453 [2024-11-20 14:08:10.671840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:47:13.453 [2024-11-20 14:08:10.671850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.453 [2024-11-20 14:08:10.671927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.453 [2024-11-20 14:08:10.671939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:13.453 [2024-11-20 14:08:10.671949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:47:13.453 [2024-11-20 14:08:10.671959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.453 [2024-11-20 14:08:10.672060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:13.453 [2024-11-20 14:08:10.672074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:13.453 [2024-11-20 14:08:10.672085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:13.453 [2024-11-20 14:08:10.672117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:13.453 [2024-11-20 14:08:10.672145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:13.453 [2024-11-20 
14:08:10.672155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:13.453 [2024-11-20 14:08:10.672166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:13.453 [2024-11-20 14:08:10.672176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:13.453 [2024-11-20 14:08:10.672185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:13.453 [2024-11-20 14:08:10.672195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:13.453 [2024-11-20 14:08:10.672205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:13.453 [2024-11-20 14:08:10.672223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:13.453 [2024-11-20 14:08:10.672243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:13.453 [2024-11-20 14:08:10.672272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:13.453 [2024-11-20 14:08:10.672306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:13.453 [2024-11-20 14:08:10.672334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:13.453 [2024-11-20 14:08:10.672362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:13.453 [2024-11-20 14:08:10.672390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:13.453 [2024-11-20 14:08:10.672409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:13.453 [2024-11-20 14:08:10.672418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:13.453 [2024-11-20 14:08:10.672427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:13.453 [2024-11-20 14:08:10.672436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:13.453 [2024-11-20 14:08:10.672445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:13.453 [2024-11-20 14:08:10.672454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:47:13.453 [2024-11-20 14:08:10.672473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:13.453 [2024-11-20 14:08:10.672495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672505] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:13.453 [2024-11-20 14:08:10.672515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:13.453 [2024-11-20 14:08:10.672526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.453 [2024-11-20 14:08:10.672546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:13.453 [2024-11-20 14:08:10.672556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:13.453 [2024-11-20 14:08:10.672566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:13.453 [2024-11-20 14:08:10.672576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:13.453 [2024-11-20 14:08:10.672585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:13.453 [2024-11-20 14:08:10.672595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:13.453 [2024-11-20 14:08:10.672605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:13.453 [2024-11-20 14:08:10.672617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:13.453 [2024-11-20 14:08:10.672640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:13.453 [2024-11-20 14:08:10.672651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:13.453 [2024-11-20 14:08:10.672661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:13.453 [2024-11-20 14:08:10.672671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:13.453 [2024-11-20 14:08:10.672682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:13.453 [2024-11-20 14:08:10.672693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:13.453 [2024-11-20 14:08:10.672703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:13.453 [2024-11-20 14:08:10.672713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:13.453 [2024-11-20 14:08:10.672724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:13.453 [2024-11-20 14:08:10.672777] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:13.453 [2024-11-20 14:08:10.672791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:13.453 [2024-11-20 14:08:10.672813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:13.453 [2024-11-20 14:08:10.672823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:13.453 [2024-11-20 14:08:10.672834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:13.454 [2024-11-20 14:08:10.672844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.454 [2024-11-20 14:08:10.672855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:13.454 [2024-11-20 14:08:10.672866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:47:13.454 [2024-11-20 14:08:10.672876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.454 [2024-11-20 14:08:10.716249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.454 [2024-11-20 14:08:10.716298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:13.454 [2024-11-20 14:08:10.716315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.322 ms 00:47:13.454 [2024-11-20 14:08:10.716326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.454 [2024-11-20 14:08:10.716432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.454 [2024-11-20 14:08:10.716444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:13.454 [2024-11-20 14:08:10.716455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:47:13.454 [2024-11-20 14:08:10.716466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.776009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.712 [2024-11-20 14:08:10.776229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:13.712 [2024-11-20 14:08:10.776253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.451 ms 00:47:13.712 [2024-11-20 14:08:10.776264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.776324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.712 [2024-11-20 
14:08:10.776336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:13.712 [2024-11-20 14:08:10.776362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:13.712 [2024-11-20 14:08:10.776373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.776898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.712 [2024-11-20 14:08:10.776913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:13.712 [2024-11-20 14:08:10.776924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:47:13.712 [2024-11-20 14:08:10.776935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.777061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.712 [2024-11-20 14:08:10.777075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:13.712 [2024-11-20 14:08:10.777086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:47:13.712 [2024-11-20 14:08:10.777106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.796453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.712 [2024-11-20 14:08:10.796512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:13.712 [2024-11-20 14:08:10.796531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.326 ms 00:47:13.712 [2024-11-20 14:08:10.796542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.712 [2024-11-20 14:08:10.815755] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:47:13.712 [2024-11-20 14:08:10.815795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:13.712 [2024-11-20 14:08:10.815811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.815822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:13.713 [2024-11-20 14:08:10.815833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.127 ms 00:47:13.713 [2024-11-20 14:08:10.815844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.845394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.845452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:13.713 [2024-11-20 14:08:10.845466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.506 ms 00:47:13.713 [2024-11-20 14:08:10.845492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.862998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.863043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:13.713 [2024-11-20 14:08:10.863055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.450 ms 00:47:13.713 [2024-11-20 14:08:10.863081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.880901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.880937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:47:13.713 [2024-11-20 14:08:10.880949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.782 ms 00:47:13.713 [2024-11-20 14:08:10.880975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.881726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.881749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:13.713 [2024-11-20 14:08:10.881762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:47:13.713 [2024-11-20 14:08:10.881772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.971732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.971795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:13.713 [2024-11-20 14:08:10.971813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.918 ms 00:47:13.713 [2024-11-20 14:08:10.971831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.982988] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:13.713 [2024-11-20 14:08:10.986187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.986221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:13.713 [2024-11-20 14:08:10.986237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.296 ms 00:47:13.713 [2024-11-20 14:08:10.986248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.986335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.986349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:13.713 [2024-11-20 14:08:10.986360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:13.713 [2024-11-20 14:08:10.986371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.986466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.986495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:13.713 [2024-11-20 14:08:10.986508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:47:13.713 [2024-11-20 14:08:10.986518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.986544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.986555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:13.713 [2024-11-20 14:08:10.986566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:13.713 [2024-11-20 14:08:10.986576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:10.986611] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:13.713 [2024-11-20 14:08:10.986623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:10.986637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:13.713 [2024-11-20 14:08:10.986647] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:47:13.713 [2024-11-20 14:08:10.986657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:11.026407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:11.026450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:13.713 [2024-11-20 14:08:11.026482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.731 ms 00:47:13.713 [2024-11-20 14:08:11.026505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:11.026589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.713 [2024-11-20 14:08:11.026603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:13.713 [2024-11-20 14:08:11.026614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:47:13.713 [2024-11-20 14:08:11.026624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.713 [2024-11-20 14:08:11.027885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.637 ms, result 0 00:47:15.090  [2024-11-20T14:08:13.350Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T14:08:14.291Z] Copying: 52/1024 [MB] (26 MBps) [2024-11-20T14:08:15.230Z] Copying: 80/1024 [MB] (27 MBps) [2024-11-20T14:08:16.167Z] Copying: 107/1024 [MB] (27 MBps) [2024-11-20T14:08:17.103Z] Copying: 134/1024 [MB] (26 MBps) [2024-11-20T14:08:18.479Z] Copying: 161/1024 [MB] (27 MBps) [2024-11-20T14:08:19.047Z] Copying: 188/1024 [MB] (27 MBps) [2024-11-20T14:08:20.437Z] Copying: 216/1024 [MB] (27 MBps) [2024-11-20T14:08:21.373Z] Copying: 242/1024 [MB] (26 MBps) [2024-11-20T14:08:22.309Z] Copying: 269/1024 [MB] (26 MBps) [2024-11-20T14:08:23.244Z] Copying: 295/1024 [MB] (25 MBps) [2024-11-20T14:08:24.180Z] Copying: 321/1024 [MB] (26 MBps) [2024-11-20T14:08:25.115Z] Copying: 348/1024 [MB] (26 MBps) [2024-11-20T14:08:26.051Z] Copying: 375/1024 [MB] (26 MBps) [2024-11-20T14:08:27.430Z] Copying: 402/1024 [MB] (27 MBps) [2024-11-20T14:08:28.048Z] Copying: 429/1024 [MB] (26 MBps) [2024-11-20T14:08:29.435Z] Copying: 456/1024 [MB] (27 MBps) [2024-11-20T14:08:30.371Z] Copying: 486/1024 [MB] (29 MBps) [2024-11-20T14:08:31.306Z] Copying: 515/1024 [MB] (28 MBps) [2024-11-20T14:08:32.242Z] Copying: 543/1024 [MB] (28 MBps) [2024-11-20T14:08:33.178Z] Copying: 572/1024 [MB] (28 MBps) [2024-11-20T14:08:34.115Z] Copying: 600/1024 [MB] (28 MBps) [2024-11-20T14:08:35.050Z] Copying: 629/1024 [MB] (28 MBps) [2024-11-20T14:08:36.453Z] Copying: 658/1024 [MB] (28 MBps) [2024-11-20T14:08:37.390Z] Copying: 685/1024 [MB] (27 MBps) [2024-11-20T14:08:38.327Z] Copying: 712/1024 [MB] (27 MBps) [2024-11-20T14:08:39.264Z] Copying: 740/1024 [MB] (27 MBps) [2024-11-20T14:08:40.199Z] Copying: 767/1024 [MB] (27 MBps) [2024-11-20T14:08:41.165Z] Copying: 795/1024 [MB] (27 MBps) [2024-11-20T14:08:42.102Z] Copying: 823/1024 [MB] (28 MBps) [2024-11-20T14:08:43.479Z] Copying: 850/1024 [MB] (27 MBps) [2024-11-20T14:08:44.048Z] Copying: 878/1024 [MB] (27 MBps) [2024-11-20T14:08:45.424Z] Copying: 905/1024 [MB] (27 MBps) [2024-11-20T14:08:46.361Z] Copying: 933/1024 [MB] (27 MBps) [2024-11-20T14:08:47.299Z] Copying: 961/1024 [MB] (28 MBps) [2024-11-20T14:08:48.235Z] Copying: 988/1024 [MB] (27 MBps) [2024-11-20T14:08:48.494Z] Copying: 1015/1024 [MB] (26 MBps) [2024-11-20T14:08:48.494Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 
14:08:48.349115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.349166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:51.171 [2024-11-20 14:08:48.349182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:51.171 [2024-11-20 14:08:48.349193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.349215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:51.171 [2024-11-20 14:08:48.353590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.353624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:51.171 [2024-11-20 14:08:48.353636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.359 ms 00:47:51.171 [2024-11-20 14:08:48.353652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.355531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.355570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:51.171 [2024-11-20 14:08:48.355582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:47:51.171 [2024-11-20 14:08:48.355593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.371059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.371212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:51.171 [2024-11-20 14:08:48.371249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms 00:47:51.171 [2024-11-20 14:08:48.371260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.376319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.376352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:51.171 [2024-11-20 14:08:48.376365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:47:51.171 [2024-11-20 14:08:48.376375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.413344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.413527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:51.171 [2024-11-20 14:08:48.413548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.911 ms 00:47:51.171 [2024-11-20 14:08:48.413559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.434847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.434886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:51.171 [2024-11-20 14:08:48.434900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.251 ms 00:47:51.171 [2024-11-20 14:08:48.434926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.435047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.435060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:51.171 [2024-11-20 14:08:48.435077] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:47:51.171 [2024-11-20 14:08:48.435087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.171 [2024-11-20 14:08:48.473436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.171 [2024-11-20 14:08:48.473507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:51.171 [2024-11-20 14:08:48.473524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.329 ms 00:47:51.171 [2024-11-20 14:08:48.473536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.432 [2024-11-20 14:08:48.511675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.432 [2024-11-20 14:08:48.511844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:51.432 [2024-11-20 14:08:48.511898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.089 ms 00:47:51.432 [2024-11-20 14:08:48.511910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.432 [2024-11-20 14:08:48.548697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.432 [2024-11-20 14:08:48.548737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:51.432 [2024-11-20 14:08:48.548751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.715 ms 00:47:51.432 [2024-11-20 14:08:48.548761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.432 [2024-11-20 14:08:48.586557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.432 [2024-11-20 14:08:48.586597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:51.432 [2024-11-20 14:08:48.586612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.715 ms 00:47:51.432 [2024-11-20 14:08:48.586622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.432 [2024-11-20 14:08:48.586661] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:51.432 [2024-11-20 14:08:48.586680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 [2024-11-20 14:08:48.586783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:51.432 
[2024-11-20 14:08:48.586794 .. 14:08:48.587611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-84: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries condensed) 00:47:51.433 [2024-11-20 14:08:48.587622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:51.433 [2024-11-20 14:08:48.587821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:51.433 [2024-11-20 14:08:48.587838] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca 00:47:51.433 [2024-11-20 14:08:48.587852] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:47:51.433 [2024-11-20 14:08:48.587862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:51.433 [2024-11-20 14:08:48.587872] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:51.433 [2024-11-20 14:08:48.587882] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:51.433 [2024-11-20 14:08:48.587892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:51.433 [2024-11-20 14:08:48.587902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:51.433 [2024-11-20 14:08:48.587912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:51.433 [2024-11-20 14:08:48.587932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:51.433 [2024-11-20 14:08:48.587942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:51.433 [2024-11-20 14:08:48.587956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:47:51.433 [2024-11-20 14:08:48.587966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:51.433 [2024-11-20 14:08:48.587978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:47:51.433 [2024-11-20 14:08:48.587987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.608525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.433 [2024-11-20 14:08:48.608565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:51.433 [2024-11-20 14:08:48.608579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.501 ms 00:47:51.433 [2024-11-20 14:08:48.608589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.609177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:51.433 [2024-11-20 14:08:48.609199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:51.433 [2024-11-20 14:08:48.609211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:47:51.433 [2024-11-20 14:08:48.609223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.663578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.433 [2024-11-20 14:08:48.663632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:51.433 [2024-11-20 14:08:48.663647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.433 [2024-11-20 14:08:48.663674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.663750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.433 [2024-11-20 14:08:48.663761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:51.433 [2024-11-20 14:08:48.663772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.433 [2024-11-20 14:08:48.663782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.663890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.433 [2024-11-20 14:08:48.663904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:51.433 [2024-11-20 14:08:48.663916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.433 [2024-11-20 14:08:48.663926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.433 [2024-11-20 14:08:48.663944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.433 [2024-11-20 14:08:48.663954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:51.433 [2024-11-20 14:08:48.663964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.433 [2024-11-20 14:08:48.663974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.791968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.792023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:51.693 [2024-11-20 14:08:48.792039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.792050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 
[2024-11-20 14:08:48.892989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:51.693 [2024-11-20 14:08:48.893066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:51.693 [2024-11-20 14:08:48.893193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:51.693 [2024-11-20 14:08:48.893265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:51.693 [2024-11-20 14:08:48.893425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:51.693 [2024-11-20 14:08:48.893525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:51.693 [2024-11-20 14:08:48.893615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:51.693 [2024-11-20 14:08:48.893695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:51.693 [2024-11-20 14:08:48.893705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:51.693 [2024-11-20 14:08:48.893715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:51.693 [2024-11-20 14:08:48.893840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.684 ms, result 0 00:47:53.597 00:47:53.597 00:47:53.597 14:08:50 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:47:53.597 [2024-11-20 14:08:50.556711] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:47:53.597 [2024-11-20 14:08:50.556882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80248 ] 00:47:53.597 [2024-11-20 14:08:50.730473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:53.597 [2024-11-20 14:08:50.845249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:54.167 [2024-11-20 14:08:51.217051] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:54.167 [2024-11-20 14:08:51.217276] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:54.167 [2024-11-20 14:08:51.378325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.378380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:54.168 [2024-11-20 14:08:51.378400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:54.168 [2024-11-20 14:08:51.378411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.378457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.378470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:54.168 [2024-11-20 14:08:51.378518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:47:54.168 [2024-11-20 14:08:51.378529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.378551] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:54.168 [2024-11-20 14:08:51.379524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:54.168 [2024-11-20 14:08:51.379551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.379562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:54.168 [2024-11-20 14:08:51.379573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:47:54.168 [2024-11-20 14:08:51.379583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.381039] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:54.168 [2024-11-20 14:08:51.400322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.400376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:54.168 [2024-11-20 14:08:51.400392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:47:54.168 [2024-11-20 14:08:51.400402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.400471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.400527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:54.168 [2024-11-20 14:08:51.400539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:47:54.168 
[2024-11-20 14:08:51.400549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.407426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.407457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:54.168 [2024-11-20 14:08:51.407469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.804 ms 00:47:54.168 [2024-11-20 14:08:51.407506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.407584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.407597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:54.168 [2024-11-20 14:08:51.407608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:47:54.168 [2024-11-20 14:08:51.407618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.407658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.407669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:54.168 [2024-11-20 14:08:51.407679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:47:54.168 [2024-11-20 14:08:51.407697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.407727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:54.168 [2024-11-20 14:08:51.412652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.412685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:54.168 [2024-11-20 14:08:51.412697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:47:54.168 [2024-11-20 14:08:51.412711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.412740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.412751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:54.168 [2024-11-20 14:08:51.412762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:54.168 [2024-11-20 14:08:51.412772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.412826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:54.168 [2024-11-20 14:08:51.412850] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:54.168 [2024-11-20 14:08:51.412886] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:54.168 [2024-11-20 14:08:51.412907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:54.168 [2024-11-20 14:08:51.412998] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:54.168 [2024-11-20 14:08:51.413011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:54.168 [2024-11-20 14:08:51.413024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:47:54.168 [2024-11-20 14:08:51.413037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413050] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:54.168 [2024-11-20 14:08:51.413070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:54.168 [2024-11-20 14:08:51.413080] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:54.168 [2024-11-20 14:08:51.413093] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:54.168 [2024-11-20 14:08:51.413104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.413114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:54.168 [2024-11-20 14:08:51.413123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:47:54.168 [2024-11-20 14:08:51.413133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.413204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.168 [2024-11-20 14:08:51.413215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:54.168 [2024-11-20 14:08:51.413225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:47:54.168 [2024-11-20 14:08:51.413235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.168 [2024-11-20 14:08:51.413332] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:54.168 [2024-11-20 14:08:51.413347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:54.168 [2024-11-20 14:08:51.413358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:54.168 [2024-11-20 14:08:51.413387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:54.168 [2024-11-20 14:08:51.413416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:54.168 [2024-11-20 14:08:51.413436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:54.168 [2024-11-20 14:08:51.413445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:54.168 [2024-11-20 14:08:51.413454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:54.168 [2024-11-20 14:08:51.413463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:54.168 [2024-11-20 14:08:51.413473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:54.168 [2024-11-20 14:08:51.413530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:47:54.168 [2024-11-20 14:08:51.413549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:54.168 [2024-11-20 14:08:51.413577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:54.168 [2024-11-20 14:08:51.413616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:54.168 [2024-11-20 14:08:51.413660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:54.168 [2024-11-20 14:08:51.413688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:54.168 [2024-11-20 14:08:51.413707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:54.168 [2024-11-20 14:08:51.413716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:54.168 [2024-11-20 14:08:51.413724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:54.168 [2024-11-20 14:08:51.413734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:54.168 [2024-11-20 14:08:51.413743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:54.168 [2024-11-20 14:08:51.413752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:54.168 [2024-11-20 14:08:51.413761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:54.168 [2024-11-20 14:08:51.413770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:54.168 [2024-11-20 14:08:51.413779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.169 [2024-11-20 14:08:51.413789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:54.169 [2024-11-20 14:08:51.413798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:54.169 [2024-11-20 14:08:51.413807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.169 [2024-11-20 14:08:51.413816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:54.169 [2024-11-20 14:08:51.413826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:54.169 [2024-11-20 14:08:51.413836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:54.169 [2024-11-20 14:08:51.413845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:54.169 [2024-11-20 14:08:51.413855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:54.169 [2024-11-20 14:08:51.413865] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:54.169 [2024-11-20 14:08:51.413873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:54.169 [2024-11-20 14:08:51.413883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:54.169 [2024-11-20 14:08:51.413892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:54.169 [2024-11-20 14:08:51.413901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:54.169 [2024-11-20 14:08:51.413912] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:54.169 [2024-11-20 14:08:51.413924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.413936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:54.169 [2024-11-20 14:08:51.413946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:54.169 [2024-11-20 14:08:51.413957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:54.169 [2024-11-20 14:08:51.413967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:54.169 [2024-11-20 14:08:51.413978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:54.169 [2024-11-20 14:08:51.413988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:54.169 [2024-11-20 14:08:51.413998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:54.169 [2024-11-20 14:08:51.414009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:54.169 [2024-11-20 14:08:51.414019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:54.169 [2024-11-20 14:08:51.414029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:54.169 [2024-11-20 14:08:51.414081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:54.169 [2024-11-20 14:08:51.414096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:54.169 [2024-11-20 14:08:51.414119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:54.169 [2024-11-20 14:08:51.414130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:54.169 [2024-11-20 14:08:51.414140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:54.169 [2024-11-20 14:08:51.414152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.169 [2024-11-20 14:08:51.414162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:54.169 [2024-11-20 14:08:51.414172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:47:54.169 [2024-11-20 14:08:51.414182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.169 [2024-11-20 14:08:51.452554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.169 [2024-11-20 14:08:51.452593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:54.169 [2024-11-20 14:08:51.452608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.325 ms 00:47:54.169 [2024-11-20 14:08:51.452619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.169 [2024-11-20 14:08:51.452711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.169 [2024-11-20 14:08:51.452723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:54.169 [2024-11-20 14:08:51.452734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:47:54.169 [2024-11-20 14:08:51.452743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.508633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.508673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:54.429 [2024-11-20 14:08:51.508686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.829 ms 00:47:54.429 [2024-11-20 14:08:51.508697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.508737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.508748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:54.429 [2024-11-20 14:08:51.508762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:54.429 [2024-11-20 14:08:51.508772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.509263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.509277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:54.429 [2024-11-20 14:08:51.509289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:47:54.429 [2024-11-20 14:08:51.509298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.509409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.509423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:54.429 [2024-11-20 14:08:51.509433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:47:54.429 [2024-11-20 14:08:51.509448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.528526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.528564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:54.429 [2024-11-20 14:08:51.528582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.057 ms 00:47:54.429 [2024-11-20 14:08:51.528592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.547981] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:54.429 [2024-11-20 14:08:51.548163] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:54.429 [2024-11-20 14:08:51.548185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.548196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:54.429 [2024-11-20 14:08:51.548208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.483 ms 00:47:54.429 [2024-11-20 14:08:51.548219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.577125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.577163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:54.429 [2024-11-20 14:08:51.577177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.866 ms 00:47:54.429 [2024-11-20 14:08:51.577203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.595342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.595509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:54.429 [2024-11-20 14:08:51.595546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.091 ms 00:47:54.429 [2024-11-20 14:08:51.595558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.613577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.613612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:54.429 [2024-11-20 14:08:51.613625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.967 ms 00:47:54.429 [2024-11-20 14:08:51.613649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.614395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.614418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:54.429 [2024-11-20 14:08:51.614429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:47:54.429 [2024-11-20 14:08:51.614442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.702001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 
14:08:51.702250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:54.429 [2024-11-20 14:08:51.702299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.520 ms 00:47:54.429 [2024-11-20 14:08:51.702311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.713171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:54.429 [2024-11-20 14:08:51.716331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.716363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:54.429 [2024-11-20 14:08:51.716378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.955 ms 00:47:54.429 [2024-11-20 14:08:51.716388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.716501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.716533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:54.429 [2024-11-20 14:08:51.716545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:54.429 [2024-11-20 14:08:51.716559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.716652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.716665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:54.429 [2024-11-20 14:08:51.716676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:47:54.429 [2024-11-20 14:08:51.716686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.429 [2024-11-20 14:08:51.716711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.429 [2024-11-20 14:08:51.716723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:54.429 [2024-11-20 14:08:51.716734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:54.429 [2024-11-20 14:08:51.716743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.430 [2024-11-20 14:08:51.716779] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:54.430 [2024-11-20 14:08:51.716791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.430 [2024-11-20 14:08:51.716802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:54.430 [2024-11-20 14:08:51.716812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:47:54.430 [2024-11-20 14:08:51.716822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.689 [2024-11-20 14:08:51.752978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.689 [2024-11-20 14:08:51.753016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:54.689 [2024-11-20 14:08:51.753029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.134 ms 00:47:54.689 [2024-11-20 14:08:51.753061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.689 [2024-11-20 14:08:51.753135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:54.689 [2024-11-20 14:08:51.753148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:54.689 [2024-11-20 
14:08:51.753158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:47:54.689 [2024-11-20 14:08:51.753168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:54.689 [2024-11-20 14:08:51.754251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.494 ms, result 0 00:47:56.068  [2024-11-20T14:08:53.980Z] Copying: 28/1024 [MB] (28 MBps) [... 34 intermediate progress updates condensed ...] [2024-11-20T14:09:28.817Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-20 14:09:28.717473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.717913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:31.494 [2024-11-20 14:09:28.718214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:31.494 [2024-11-20 14:09:28.718443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.718745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:31.494 [2024-11-20 14:09:28.726429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.726609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:31.494 [2024-11-20 14:09:28.726749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:48:31.494 [2024-11-20 14:09:28.726798] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.727035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.727053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:31.494 [2024-11-20 14:09:28.727067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:48:31.494 [2024-11-20 14:09:28.727080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.730148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.730203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:31.494 [2024-11-20 14:09:28.730219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:48:31.494 [2024-11-20 14:09:28.730231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.736392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.736636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:31.494 [2024-11-20 14:09:28.736679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:48:31.494 [2024-11-20 14:09:28.736704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.775885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.775948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:31.494 [2024-11-20 14:09:28.775983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.049 ms 00:48:31.494 [2024-11-20 14:09:28.775997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.796875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.796933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:31.494 [2024-11-20 14:09:28.796968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.822 ms 00:48:31.494 [2024-11-20 14:09:28.796981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.494 [2024-11-20 14:09:28.797137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.494 [2024-11-20 14:09:28.797162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:31.494 [2024-11-20 14:09:28.797175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:48:31.494 [2024-11-20 14:09:28.797187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.770 [2024-11-20 14:09:28.833737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.770 [2024-11-20 14:09:28.833804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:31.770 [2024-11-20 14:09:28.833822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.529 ms 00:48:31.770 [2024-11-20 14:09:28.833833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.770 [2024-11-20 14:09:28.870069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.771 [2024-11-20 14:09:28.870128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:31.771 [2024-11-20 14:09:28.870159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.188 ms 
00:48:31.771 [2024-11-20 14:09:28.870172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.771 [2024-11-20 14:09:28.906208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.771 [2024-11-20 14:09:28.906250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:48:31.771 [2024-11-20 14:09:28.906265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.991 ms 00:48:31.771 [2024-11-20 14:09:28.906277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.771 [2024-11-20 14:09:28.942189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.771 [2024-11-20 14:09:28.942249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:48:31.771 [2024-11-20 14:09:28.942265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.824 ms 00:48:31.771 [2024-11-20 14:09:28.942277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.771 [2024-11-20 14:09:28.942320] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:48:31.771 [2024-11-20 14:09:28.942340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:48:31.771 [2024-11-20 14:09:28.942604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.942996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:48:31.771 [2024-11-20 14:09:28.943169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943540] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:48:31.772 [2024-11-20 14:09:28.943660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:48:31.772 [2024-11-20 14:09:28.943686] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca 00:48:31.772 [2024-11-20 14:09:28.943699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:48:31.772 [2024-11-20 14:09:28.943710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:48:31.772 [2024-11-20 14:09:28.943722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:48:31.772 [2024-11-20 14:09:28.943734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:48:31.772 [2024-11-20 14:09:28.943746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:48:31.772 [2024-11-20 14:09:28.943758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:48:31.772 [2024-11-20 14:09:28.943782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:48:31.772 [2024-11-20 14:09:28.943793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:48:31.772 [2024-11-20 14:09:28.943804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:48:31.772 [2024-11-20 14:09:28.943816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.772 [2024-11-20 14:09:28.943828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:48:31.772 [2024-11-20 14:09:28.943840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:48:31.772 [2024-11-20 14:09:28.943852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:28.964325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.772 [2024-11-20 14:09:28.964519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:48:31.772 [2024-11-20 14:09:28.964544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.426 ms 00:48:31.772 [2024-11-20 14:09:28.964557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:28.965093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:31.772 [2024-11-20 14:09:28.965114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:31.772 [2024-11-20 14:09:28.965128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:48:31.772 [2024-11-20 14:09:28.965149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:29.018439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:31.772 [2024-11-20 14:09:29.018640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:31.772 [2024-11-20 14:09:29.018666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:31.772 [2024-11-20 14:09:29.018679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:29.018741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:31.772 [2024-11-20 14:09:29.018754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:31.772 [2024-11-20 14:09:29.018767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:31.772 [2024-11-20 14:09:29.018787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:29.018872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:31.772 [2024-11-20 14:09:29.018888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:31.772 [2024-11-20 14:09:29.018900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:31.772 [2024-11-20 14:09:29.018913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:31.772 [2024-11-20 14:09:29.018934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:31.772 [2024-11-20 14:09:29.018947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:31.772 [2024-11-20 14:09:29.018960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:31.772 [2024-11-20 14:09:29.018972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.034 [2024-11-20 14:09:29.141592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.141868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:32.035 [2024-11-20 14:09:29.141895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.141908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.243123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:32.035 [2024-11-20 14:09:29.243140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.243160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.243279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:32.035 [2024-11-20 14:09:29.243292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.243304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:48:32.035 [2024-11-20 14:09:29.243367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:32.035 [2024-11-20 14:09:29.243378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.243390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.243565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:32.035 [2024-11-20 14:09:29.243578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.243590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.243652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:32.035 [2024-11-20 14:09:29.243674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.035 [2024-11-20 14:09:29.243686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.035 [2024-11-20 14:09:29.243735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.035 [2024-11-20 14:09:29.243749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:32.036 [2024-11-20 14:09:29.243762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.036 [2024-11-20 14:09:29.243774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.036 [2024-11-20 14:09:29.243822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:32.036 [2024-11-20 14:09:29.243836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:32.036 [2024-11-20 14:09:29.243848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:32.036 [2024-11-20 14:09:29.243860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:32.036 [2024-11-20 14:09:29.243995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.499 ms, result 0 00:48:32.975 00:48:32.975 00:48:33.234 14:09:30 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:35.139 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:48:35.139 14:09:32 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:48:35.139 [2024-11-20 14:09:32.218065] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
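Editor's annotation: each FTL management step in the trace above is logged as a fixed quadruple of trace_step lines — Action (or Rollback), name, duration, status — and every sequence closes with a finish_msg summary (here: 'FTL shutdown', duration = 526.499 ms, result 0). Below is a minimal sketch for pulling per-step timings out of such a log; it assumes one log entry per line, as the console originally writes them, and the file path argument is a placeholder, not part of this test.

import re
import sys
from collections import Counter

# Pair each "name:" trace_step line with the "duration:" line that
# follows it, then total the time spent per management step.
TOKEN = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
    r"(?:name: (?P<name>.+?)|duration: (?P<dur>[0-9.]+) ms)\s*$",
    re.M,
)

def step_durations(text):
    pending = None
    for m in TOKEN.finditer(text):
        if m.group("name") is not None:
            pending = m.group("name").strip()
        elif pending is not None:
            yield pending, float(m.group("dur"))
            pending = None

if __name__ == "__main__":
    totals = Counter()
    with open(sys.argv[1]) as f:  # e.g. a saved copy of this console log
        for name, ms in step_durations(f.read()):
            totals[name] += ms
    for name, ms in totals.most_common():
        print(f"{ms:10.3f} ms  {name}")

Run over this build's output it would surface, for instance, the 108.339 ms Persist P2L metadata step as the dominant cost of the second shutdown below.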
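A note on the ftl_dev_dump_stats block in the shutdown above: with total writes: 960 and user writes: 0, the reported WAF: inf is the expected degenerate case of write amplification, presumably computed as total media writes over user writes — all 960 writes appear to be internal metadata persisted during shutdown, with no user I/O through this instance. A one-line sketch of that relationship (names illustrative, not SPDK API):

def waf(total_writes: int, user_writes: int) -> float:
    # WAF = media writes / user writes; degenerates to inf with no user I/O
    return float("inf") if user_writes == 0 else total_writes / user_writes

assert waf(960, 0) == float("inf")  # matches the "WAF: inf" line above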
00:48:35.139 [2024-11-20 14:09:32.218382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80665 ] 00:48:35.139 [2024-11-20 14:09:32.399596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:35.399 [2024-11-20 14:09:32.555219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:35.659 [2024-11-20 14:09:32.913416] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:35.659 [2024-11-20 14:09:32.913510] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:35.920 [2024-11-20 14:09:33.075330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.075593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:35.920 [2024-11-20 14:09:33.075628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:35.920 [2024-11-20 14:09:33.075642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.075718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.075733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:35.920 [2024-11-20 14:09:33.075751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:48:35.920 [2024-11-20 14:09:33.075763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.075791] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:35.920 [2024-11-20 14:09:33.076870] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:35.920 [2024-11-20 14:09:33.076902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.076915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:35.920 [2024-11-20 14:09:33.076928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:48:35.920 [2024-11-20 14:09:33.076940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.078421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:48:35.920 [2024-11-20 14:09:33.097179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.097241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:48:35.920 [2024-11-20 14:09:33.097258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.759 ms 00:48:35.920 [2024-11-20 14:09:33.097271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.097343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.097357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:48:35.920 [2024-11-20 14:09:33.097369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:48:35.920 [2024-11-20 14:09:33.097380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.104247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:48:35.920 [2024-11-20 14:09:33.104281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:35.920 [2024-11-20 14:09:33.104294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:48:35.920 [2024-11-20 14:09:33.104327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.920 [2024-11-20 14:09:33.104409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.920 [2024-11-20 14:09:33.104425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:35.920 [2024-11-20 14:09:33.104438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:48:35.921 [2024-11-20 14:09:33.104449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.104510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.921 [2024-11-20 14:09:33.104525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:35.921 [2024-11-20 14:09:33.104538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:48:35.921 [2024-11-20 14:09:33.104549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.104583] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:35.921 [2024-11-20 14:09:33.109249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.921 [2024-11-20 14:09:33.109284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:35.921 [2024-11-20 14:09:33.109298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:48:35.921 [2024-11-20 14:09:33.109313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.109346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.921 [2024-11-20 14:09:33.109357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:35.921 [2024-11-20 14:09:33.109369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:48:35.921 [2024-11-20 14:09:33.109380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.109438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:48:35.921 [2024-11-20 14:09:33.109463] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:48:35.921 [2024-11-20 14:09:33.109529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:48:35.921 [2024-11-20 14:09:33.109554] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:48:35.921 [2024-11-20 14:09:33.109647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:35.921 [2024-11-20 14:09:33.109662] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:35.921 [2024-11-20 14:09:33.109677] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:35.921 [2024-11-20 14:09:33.109692] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:35.921 [2024-11-20 14:09:33.109707] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:35.921 [2024-11-20 14:09:33.109721] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:35.921 [2024-11-20 14:09:33.109732] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:35.921 [2024-11-20 14:09:33.109744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:35.921 [2024-11-20 14:09:33.109758] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:35.921 [2024-11-20 14:09:33.109771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.921 [2024-11-20 14:09:33.109782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:35.921 [2024-11-20 14:09:33.109794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:48:35.921 [2024-11-20 14:09:33.109806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.109881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.921 [2024-11-20 14:09:33.109893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:35.921 [2024-11-20 14:09:33.109906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:48:35.921 [2024-11-20 14:09:33.109917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.921 [2024-11-20 14:09:33.110039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:35.921 [2024-11-20 14:09:33.110056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:35.921 [2024-11-20 14:09:33.110068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:35.921 [2024-11-20 14:09:33.110104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:35.921 [2024-11-20 14:09:33.110138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:35.921 [2024-11-20 14:09:33.110161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:35.921 [2024-11-20 14:09:33.110173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:35.921 [2024-11-20 14:09:33.110185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:35.921 [2024-11-20 14:09:33.110196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:35.921 [2024-11-20 14:09:33.110208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:35.921 [2024-11-20 14:09:33.110230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:35.921 [2024-11-20 14:09:33.110252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110264] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:35.921 [2024-11-20 14:09:33.110287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:35.921 [2024-11-20 14:09:33.110320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:35.921 [2024-11-20 14:09:33.110354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:35.921 [2024-11-20 14:09:33.110387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:35.921 [2024-11-20 14:09:33.110420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:35.921 [2024-11-20 14:09:33.110441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:35.921 [2024-11-20 14:09:33.110453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:35.921 [2024-11-20 14:09:33.110464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:35.921 [2024-11-20 14:09:33.110476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:35.921 [2024-11-20 14:09:33.110487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:35.921 [2024-11-20 14:09:33.110498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:35.921 [2024-11-20 14:09:33.110533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:35.921 [2024-11-20 14:09:33.110544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110557] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:35.921 [2024-11-20 14:09:33.110570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:35.921 [2024-11-20 14:09:33.110581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.921 [2024-11-20 14:09:33.110605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:35.921 [2024-11-20 14:09:33.110617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:35.921 [2024-11-20 14:09:33.110628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:35.921 
[2024-11-20 14:09:33.110639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:35.921 [2024-11-20 14:09:33.110651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:35.921 [2024-11-20 14:09:33.110663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:35.921 [2024-11-20 14:09:33.110676] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:35.921 [2024-11-20 14:09:33.110689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:35.921 [2024-11-20 14:09:33.110703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:35.921 [2024-11-20 14:09:33.110715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:35.921 [2024-11-20 14:09:33.110727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:35.921 [2024-11-20 14:09:33.110740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:35.921 [2024-11-20 14:09:33.110753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:35.921 [2024-11-20 14:09:33.110765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:35.921 [2024-11-20 14:09:33.110778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:35.921 [2024-11-20 14:09:33.110790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:48:35.921 [2024-11-20 14:09:33.110802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:35.921 [2024-11-20 14:09:33.110814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:35.921 [2024-11-20 14:09:33.110826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:35.922 [2024-11-20 14:09:33.110839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:35.922 [2024-11-20 14:09:33.110851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:35.922 [2024-11-20 14:09:33.110864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:35.922 [2024-11-20 14:09:33.110876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:35.922 [2024-11-20 14:09:33.110893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:35.922 [2024-11-20 14:09:33.110906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:48:35.922 [2024-11-20 14:09:33.110919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:35.922 [2024-11-20 14:09:33.110931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:35.922 [2024-11-20 14:09:33.110943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:35.922 [2024-11-20 14:09:33.110956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.110968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:35.922 [2024-11-20 14:09:33.110980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:48:35.922 [2024-11-20 14:09:33.110992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.148944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.148993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:35.922 [2024-11-20 14:09:33.149010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.900 ms 00:48:35.922 [2024-11-20 14:09:33.149022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.149111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.149125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:35.922 [2024-11-20 14:09:33.149138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:48:35.922 [2024-11-20 14:09:33.149150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.209155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.209402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:35.922 [2024-11-20 14:09:33.209428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.932 ms 00:48:35.922 [2024-11-20 14:09:33.209441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.209487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.209520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:35.922 [2024-11-20 14:09:33.209541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:48:35.922 [2024-11-20 14:09:33.209553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.210070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.210086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:35.922 [2024-11-20 14:09:33.210099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:48:35.922 [2024-11-20 14:09:33.210111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.210234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.210250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:35.922 [2024-11-20 14:09:33.210263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:48:35.922 [2024-11-20 14:09:33.210282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.922 [2024-11-20 14:09:33.229385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.922 [2024-11-20 14:09:33.229424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:35.922 [2024-11-20 14:09:33.229443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.078 ms 00:48:35.922 [2024-11-20 14:09:33.229455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.248191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:48:36.182 [2024-11-20 14:09:33.248365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:48:36.182 [2024-11-20 14:09:33.248404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.248417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:48:36.182 [2024-11-20 14:09:33.248430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.796 ms 00:48:36.182 [2024-11-20 14:09:33.248442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.277470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.277518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:48:36.182 [2024-11-20 14:09:33.277533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.968 ms 00:48:36.182 [2024-11-20 14:09:33.277545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.294849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.294889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:48:36.182 [2024-11-20 14:09:33.294904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.259 ms 00:48:36.182 [2024-11-20 14:09:33.294915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.312510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.312688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:48:36.182 [2024-11-20 14:09:33.312729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:48:36.182 [2024-11-20 14:09:33.312741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.313556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.313593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:36.182 [2024-11-20 14:09:33.313607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:48:36.182 [2024-11-20 14:09:33.313624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.395413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.395732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:48:36.182 [2024-11-20 14:09:33.395771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.760 ms 00:48:36.182 [2024-11-20 14:09:33.395785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.406519] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:36.182 [2024-11-20 14:09:33.409309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.409346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:36.182 [2024-11-20 14:09:33.409362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.413 ms 00:48:36.182 [2024-11-20 14:09:33.409376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.409476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.409512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:48:36.182 [2024-11-20 14:09:33.409526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:48:36.182 [2024-11-20 14:09:33.409542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.409646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.409660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:36.182 [2024-11-20 14:09:33.409673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:48:36.182 [2024-11-20 14:09:33.409685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.409714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.409727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:36.182 [2024-11-20 14:09:33.409739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:36.182 [2024-11-20 14:09:33.409751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.409791] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:48:36.182 [2024-11-20 14:09:33.409806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.409818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:48:36.182 [2024-11-20 14:09:33.409829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:48:36.182 [2024-11-20 14:09:33.409841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.445360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.445401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:36.182 [2024-11-20 14:09:33.445416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.492 ms 00:48:36.182 [2024-11-20 14:09:33.445452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.182 [2024-11-20 14:09:33.445554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:36.182 [2024-11-20 14:09:33.445569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:36.182 [2024-11-20 14:09:33.445582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:48:36.182 [2024-11-20 14:09:33.445594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
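Editor's annotation on the layout dumps above: region sizes are printed twice — dump_region gives MiB, while the SB metadata layout lines give blk_offs/blk_sz in FTL blocks. The two agree if one FTL block is 4 KiB; that block size is inferred from the dump itself, not stated in it. For example, the l2p region's blk_sz:0x5000 (20480 blocks) is exactly the 80.00 MiB printed for Region l2p. A small sketch of the conversion:

FTL_BLOCK = 4096  # bytes; inferred by matching blk_sz against the MiB figures

def blocks_to_mib(nblocks: int) -> float:
    return nblocks * FTL_BLOCK / (1024 * 1024)

print(f"{blocks_to_mib(0x5000):.2f} MiB")  # 80.00 -> matches Region l2p
print(f"{blocks_to_mib(0x20):.2f} MiB")    # 0.12  -> matches Region sb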
00:48:36.182 [2024-11-20 14:09:33.446846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.988 ms, result 0 00:48:37.563  [2024-11-20T14:09:35.824Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T14:09:36.762Z] Copying: 51/1024 [MB] (26 MBps) [2024-11-20T14:09:37.700Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-20T14:09:38.636Z] Copying: 103/1024 [MB] (26 MBps) [2024-11-20T14:09:39.573Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-20T14:09:40.508Z] Copying: 158/1024 [MB] (27 MBps) [2024-11-20T14:09:41.476Z] Copying: 184/1024 [MB] (26 MBps) [2024-11-20T14:09:42.877Z] Copying: 210/1024 [MB] (25 MBps) [2024-11-20T14:09:43.814Z] Copying: 236/1024 [MB] (26 MBps) [2024-11-20T14:09:44.750Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-20T14:09:45.688Z] Copying: 289/1024 [MB] (26 MBps) [2024-11-20T14:09:46.624Z] Copying: 315/1024 [MB] (26 MBps) [2024-11-20T14:09:47.560Z] Copying: 340/1024 [MB] (25 MBps) [2024-11-20T14:09:48.496Z] Copying: 366/1024 [MB] (25 MBps) [2024-11-20T14:09:49.867Z] Copying: 391/1024 [MB] (25 MBps) [2024-11-20T14:09:50.803Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-20T14:09:51.738Z] Copying: 443/1024 [MB] (25 MBps) [2024-11-20T14:09:52.674Z] Copying: 468/1024 [MB] (25 MBps) [2024-11-20T14:09:53.607Z] Copying: 493/1024 [MB] (24 MBps) [2024-11-20T14:09:54.544Z] Copying: 518/1024 [MB] (25 MBps) [2024-11-20T14:09:55.479Z] Copying: 543/1024 [MB] (24 MBps) [2024-11-20T14:09:56.857Z] Copying: 569/1024 [MB] (25 MBps) [2024-11-20T14:09:57.792Z] Copying: 594/1024 [MB] (25 MBps) [2024-11-20T14:09:58.728Z] Copying: 620/1024 [MB] (25 MBps) [2024-11-20T14:09:59.666Z] Copying: 645/1024 [MB] (25 MBps) [2024-11-20T14:10:00.624Z] Copying: 671/1024 [MB] (25 MBps) [2024-11-20T14:10:01.562Z] Copying: 696/1024 [MB] (25 MBps) [2024-11-20T14:10:02.499Z] Copying: 722/1024 [MB] (25 MBps) [2024-11-20T14:10:03.876Z] Copying: 747/1024 [MB] (25 MBps) [2024-11-20T14:10:04.815Z] Copying: 772/1024 [MB] (24 MBps) [2024-11-20T14:10:05.753Z] Copying: 797/1024 [MB] (25 MBps) [2024-11-20T14:10:06.690Z] Copying: 823/1024 [MB] (25 MBps) [2024-11-20T14:10:07.628Z] Copying: 847/1024 [MB] (24 MBps) [2024-11-20T14:10:08.565Z] Copying: 872/1024 [MB] (25 MBps) [2024-11-20T14:10:09.499Z] Copying: 897/1024 [MB] (24 MBps) [2024-11-20T14:10:10.521Z] Copying: 923/1024 [MB] (25 MBps) [2024-11-20T14:10:11.898Z] Copying: 948/1024 [MB] (24 MBps) [2024-11-20T14:10:12.466Z] Copying: 973/1024 [MB] (25 MBps) [2024-11-20T14:10:13.844Z] Copying: 999/1024 [MB] (25 MBps) [2024-11-20T14:10:14.413Z] Copying: 1023/1024 [MB] (23 MBps) [2024-11-20T14:10:14.413Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 14:10:14.255286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.255489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:17.090 [2024-11-20 14:10:14.255623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:17.090 [2024-11-20 14:10:14.255742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.258369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:17.090 [2024-11-20 14:10:14.264831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.264994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:17.090 [2024-11-20 14:10:14.265114] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.262 ms 00:49:17.090 [2024-11-20 14:10:14.265203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.275934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.276088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:17.090 [2024-11-20 14:10:14.276182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.141 ms 00:49:17.090 [2024-11-20 14:10:14.276272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.297387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.297584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:17.090 [2024-11-20 14:10:14.297728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.059 ms 00:49:17.090 [2024-11-20 14:10:14.297763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.302801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.302839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:17.090 [2024-11-20 14:10:14.302853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.993 ms 00:49:17.090 [2024-11-20 14:10:14.302882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.339511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.339552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:17.090 [2024-11-20 14:10:14.339568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.554 ms 00:49:17.090 [2024-11-20 14:10:14.339597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.090 [2024-11-20 14:10:14.360801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.090 [2024-11-20 14:10:14.360852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:17.090 [2024-11-20 14:10:14.360868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.161 ms 00:49:17.090 [2024-11-20 14:10:14.360881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.350 [2024-11-20 14:10:14.469267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.350 [2024-11-20 14:10:14.469335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:17.350 [2024-11-20 14:10:14.469353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.339 ms 00:49:17.351 [2024-11-20 14:10:14.469366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.351 [2024-11-20 14:10:14.506453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.351 [2024-11-20 14:10:14.506502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:17.351 [2024-11-20 14:10:14.506535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.066 ms 00:49:17.351 [2024-11-20 14:10:14.506547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.351 [2024-11-20 14:10:14.542604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.351 [2024-11-20 14:10:14.542659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:17.351 
[2024-11-20 14:10:14.542674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.014 ms 00:49:17.351 [2024-11-20 14:10:14.542702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.351 [2024-11-20 14:10:14.578657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.351 [2024-11-20 14:10:14.578829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:17.351 [2024-11-20 14:10:14.578870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.913 ms 00:49:17.351 [2024-11-20 14:10:14.578882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.351 [2024-11-20 14:10:14.614884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.351 [2024-11-20 14:10:14.614927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:17.351 [2024-11-20 14:10:14.614942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.915 ms 00:49:17.351 [2024-11-20 14:10:14.614954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.351 [2024-11-20 14:10:14.614995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:17.351 [2024-11-20 14:10:14.615013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117248 / 261120 wr_cnt: 1 state: open 00:49:17.351 [2024-11-20 14:10:14.615028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free [99 identical per-band entries condensed] 00:49:17.352 [2024-11-20 14:10:14.616327] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:17.352 [2024-11-20 14:10:14.616340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca 00:49:17.352 [2024-11-20 14:10:14.616352] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117248 00:49:17.352 [2024-11-20 14:10:14.616364] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118208 00:49:17.352 [2024-11-20 14:10:14.616375] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117248 00:49:17.352 [2024-11-20 14:10:14.616387] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:49:17.352 [2024-11-20 14:10:14.616399] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:17.352 [2024-11-20 14:10:14.616418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:17.352 [2024-11-20 14:10:14.616442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:17.352 [2024-11-20 14:10:14.616453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:17.352 [2024-11-20 14:10:14.616464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:17.352 [2024-11-20 14:10:14.616475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.352 [2024-11-20 14:10:14.616498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:17.352 [2024-11-20 14:10:14.616511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:49:17.352 [2024-11-20 14:10:14.616523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.352 [2024-11-20 14:10:14.637419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.352 [2024-11-20 14:10:14.637458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:17.352 [2024-11-20 14:10:14.637473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.855 ms 00:49:17.352 [2024-11-20 14:10:14.637683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.352 [2024-11-20 14:10:14.638238] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.352 [2024-11-20 14:10:14.638257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:17.352 [2024-11-20 14:10:14.638272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:49:17.352 [2024-11-20 14:10:14.638285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.690193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.690238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:17.612 [2024-11-20 14:10:14.690254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.690265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.690321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.690334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:17.612 [2024-11-20 14:10:14.690346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.690357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.690453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.690469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:17.612 [2024-11-20 14:10:14.690514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.690543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.690565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.690578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:17.612 [2024-11-20 14:10:14.690591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.690603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.815981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.816037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:17.612 [2024-11-20 14:10:14.816061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.816074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:17.612 [2024-11-20 14:10:14.914264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:17.612 [2024-11-20 14:10:14.914402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:17.612 [2024-11-20 14:10:14.914487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:17.612 [2024-11-20 14:10:14.914666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:17.612 [2024-11-20 14:10:14.914754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:17.612 [2024-11-20 14:10:14.914830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.914891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:17.612 [2024-11-20 14:10:14.914905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:17.612 [2024-11-20 14:10:14.914916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:17.612 [2024-11-20 14:10:14.914928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.612 [2024-11-20 14:10:14.915073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 661.479 ms, result 0 00:49:19.518 00:49:19.518 00:49:19.518 14:10:16 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:49:19.518 [2024-11-20 14:10:16.548173] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:49:19.518 [2024-11-20 14:10:16.548360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:49:19.518 [2024-11-20 14:10:16.739864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:19.777 [2024-11-20 14:10:16.857416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:20.037 [2024-11-20 14:10:17.215719] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:20.037 [2024-11-20 14:10:17.215791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:20.297 [2024-11-20 14:10:17.377913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.377978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:20.297 [2024-11-20 14:10:17.378016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:20.297 [2024-11-20 14:10:17.378030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.378082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.378096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:20.297 [2024-11-20 14:10:17.378113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:49:20.297 [2024-11-20 14:10:17.378124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.378150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:20.297 [2024-11-20 14:10:17.379145] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:20.297 [2024-11-20 14:10:17.379178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.379190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:20.297 [2024-11-20 14:10:17.379204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:49:20.297 [2024-11-20 14:10:17.379216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.380781] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:49:20.297 [2024-11-20 14:10:17.400131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.400178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:49:20.297 [2024-11-20 14:10:17.400195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.350 ms 00:49:20.297 [2024-11-20 14:10:17.400207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.400285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.400300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:49:20.297 [2024-11-20 14:10:17.400312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:49:20.297 [2024-11-20 14:10:17.400324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.407471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:49:20.297 [2024-11-20 14:10:17.407514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:20.297 [2024-11-20 14:10:17.407544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.065 ms 00:49:20.297 [2024-11-20 14:10:17.407563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.407656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.407672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:20.297 [2024-11-20 14:10:17.407686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:49:20.297 [2024-11-20 14:10:17.407698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.407746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.407760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:20.297 [2024-11-20 14:10:17.407773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:49:20.297 [2024-11-20 14:10:17.407785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.407820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:20.297 [2024-11-20 14:10:17.412973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.413011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:20.297 [2024-11-20 14:10:17.413025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:49:20.297 [2024-11-20 14:10:17.413042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.413077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.297 [2024-11-20 14:10:17.413090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:20.297 [2024-11-20 14:10:17.413103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:20.297 [2024-11-20 14:10:17.413114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.297 [2024-11-20 14:10:17.413174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:49:20.297 [2024-11-20 14:10:17.413201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:49:20.297 [2024-11-20 14:10:17.413240] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:49:20.297 [2024-11-20 14:10:17.413264] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:49:20.297 [2024-11-20 14:10:17.413358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:20.297 [2024-11-20 14:10:17.413373] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:20.298 [2024-11-20 14:10:17.413388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:20.298 [2024-11-20 14:10:17.413404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:20.298 [2024-11-20 14:10:17.413418] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:20.298 [2024-11-20 14:10:17.413431] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:20.298 [2024-11-20 14:10:17.413443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:20.298 [2024-11-20 14:10:17.413454] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:20.298 [2024-11-20 14:10:17.413471] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:20.298 [2024-11-20 14:10:17.413505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.298 [2024-11-20 14:10:17.413518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:20.298 [2024-11-20 14:10:17.413530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:49:20.298 [2024-11-20 14:10:17.413542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.298 [2024-11-20 14:10:17.413636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.298 [2024-11-20 14:10:17.413650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:20.298 [2024-11-20 14:10:17.413662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:49:20.298 [2024-11-20 14:10:17.413674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.298 [2024-11-20 14:10:17.413786] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:20.298 [2024-11-20 14:10:17.413805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:20.298 [2024-11-20 14:10:17.413817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:20.298 [2024-11-20 14:10:17.413829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.413842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:20.298 [2024-11-20 14:10:17.413853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.413865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:20.298 [2024-11-20 14:10:17.413876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:20.298 [2024-11-20 14:10:17.413889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:20.298 [2024-11-20 14:10:17.413900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:20.298 [2024-11-20 14:10:17.413911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:20.298 [2024-11-20 14:10:17.413922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:20.298 [2024-11-20 14:10:17.413934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:20.298 [2024-11-20 14:10:17.413945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:20.298 [2024-11-20 14:10:17.413956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:49:20.298 [2024-11-20 14:10:17.413978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.413990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:20.298 [2024-11-20 14:10:17.414001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414011] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:20.298 [2024-11-20 14:10:17.414033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:20.298 [2024-11-20 14:10:17.414067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:20.298 [2024-11-20 14:10:17.414099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:20.298 [2024-11-20 14:10:17.414134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:20.298 [2024-11-20 14:10:17.414166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:20.298 [2024-11-20 14:10:17.414188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:20.298 [2024-11-20 14:10:17.414199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:49:20.298 [2024-11-20 14:10:17.414210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:20.298 [2024-11-20 14:10:17.414238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:20.298 [2024-11-20 14:10:17.414250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:49:20.298 [2024-11-20 14:10:17.414262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:20.298 [2024-11-20 14:10:17.414286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:49:20.298 [2024-11-20 14:10:17.414297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:20.298 [2024-11-20 14:10:17.414323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:20.298 [2024-11-20 14:10:17.414335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:20.298 [2024-11-20 14:10:17.414360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:20.298 [2024-11-20 14:10:17.414373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:20.298 [2024-11-20 14:10:17.414385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:20.298 
[2024-11-20 14:10:17.414397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:20.298 [2024-11-20 14:10:17.414408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:20.298 [2024-11-20 14:10:17.414420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:20.298 [2024-11-20 14:10:17.414434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:20.298 [2024-11-20 14:10:17.414449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:20.298 [2024-11-20 14:10:17.414463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:20.298 [2024-11-20 14:10:17.414477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:49:20.298 [2024-11-20 14:10:17.414490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:49:20.298 [2024-11-20 14:10:17.414516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:49:20.298 [2024-11-20 14:10:17.414530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:49:20.298 [2024-11-20 14:10:17.414543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:49:20.298 [2024-11-20 14:10:17.414557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:49:20.298 [2024-11-20 14:10:17.414571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:49:20.298 [2024-11-20 14:10:17.414584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:49:20.299 [2024-11-20 14:10:17.414597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:49:20.299 [2024-11-20 14:10:17.414662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:20.299 [2024-11-20 14:10:17.414682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:49:20.299 [2024-11-20 14:10:17.414722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:20.299 [2024-11-20 14:10:17.414734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:20.299 [2024-11-20 14:10:17.414746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:20.299 [2024-11-20 14:10:17.414759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.414773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:20.299 [2024-11-20 14:10:17.414785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:49:20.299 [2024-11-20 14:10:17.414797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.454903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.454954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:20.299 [2024-11-20 14:10:17.454969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.051 ms 00:49:20.299 [2024-11-20 14:10:17.454999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.455095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.455108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:20.299 [2024-11-20 14:10:17.455121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:49:20.299 [2024-11-20 14:10:17.455132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.509899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.509947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:20.299 [2024-11-20 14:10:17.509963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.687 ms 00:49:20.299 [2024-11-20 14:10:17.509992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.510043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.510056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:20.299 [2024-11-20 14:10:17.510074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:49:20.299 [2024-11-20 14:10:17.510085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.510625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.510642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:20.299 [2024-11-20 14:10:17.510654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:49:20.299 [2024-11-20 14:10:17.510666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.510788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.510803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:20.299 [2024-11-20 14:10:17.510815] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:49:20.299 [2024-11-20 14:10:17.510833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.532654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.532697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:20.299 [2024-11-20 14:10:17.532718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.794 ms 00:49:20.299 [2024-11-20 14:10:17.532731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.551497] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:49:20.299 [2024-11-20 14:10:17.551560] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:49:20.299 [2024-11-20 14:10:17.551578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.551590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:49:20.299 [2024-11-20 14:10:17.551604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.714 ms 00:49:20.299 [2024-11-20 14:10:17.551616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.581448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.581506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:49:20.299 [2024-11-20 14:10:17.581539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.777 ms 00:49:20.299 [2024-11-20 14:10:17.581551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.299 [2024-11-20 14:10:17.599793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.299 [2024-11-20 14:10:17.599848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:49:20.299 [2024-11-20 14:10:17.599863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.192 ms 00:49:20.299 [2024-11-20 14:10:17.599891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.617982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.618017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:49:20.559 [2024-11-20 14:10:17.618048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.047 ms 00:49:20.559 [2024-11-20 14:10:17.618060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.618914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.618942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:20.559 [2024-11-20 14:10:17.618956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:49:20.559 [2024-11-20 14:10:17.618973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.704216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.704299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:49:20.559 [2024-11-20 14:10:17.704326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.215 ms 00:49:20.559 [2024-11-20 14:10:17.704339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.715214] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:49:20.559 [2024-11-20 14:10:17.718336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.718365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:20.559 [2024-11-20 14:10:17.718398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.929 ms 00:49:20.559 [2024-11-20 14:10:17.718411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.718529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.718545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:49:20.559 [2024-11-20 14:10:17.718559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:49:20.559 [2024-11-20 14:10:17.718575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.720194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.720235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:20.559 [2024-11-20 14:10:17.720250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:49:20.559 [2024-11-20 14:10:17.720263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.720305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.720318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:20.559 [2024-11-20 14:10:17.720331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:20.559 [2024-11-20 14:10:17.720342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.720390] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:49:20.559 [2024-11-20 14:10:17.720404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.720416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:49:20.559 [2024-11-20 14:10:17.720428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:49:20.559 [2024-11-20 14:10:17.720440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.757164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.757203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:20.559 [2024-11-20 14:10:17.757235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.700 ms 00:49:20.559 [2024-11-20 14:10:17.757255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:20.559 [2024-11-20 14:10:17.757335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:20.559 [2024-11-20 14:10:17.757349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:20.559 [2024-11-20 14:10:17.757361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:49:20.559 [2024-11-20 14:10:17.757373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
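For reference, the spdk_dd restore step launched above passes --skip=131072 and --count=262144. In this run those units evidently correspond to 4 KiB FTL blocks, since the progress output that follows reports the copy as 1024 MB in total. A minimal Python sketch of that arithmetic (the block size is an inference from the log, not a documented spdk_dd default):

    # Unit arithmetic for the spdk_dd invocation shown above (a sketch; the
    # 4 KiB block size is inferred from the log, where --count=262144 units
    # are reported as 1024 MB, and is not read from any SPDK source here).
    BLOCK_SIZE = 4096                           # bytes per FTL block (assumed)
    skip_blocks, count_blocks = 131072, 262144  # from the spdk_dd command line
    print(skip_blocks * BLOCK_SIZE // 2**20, "MiB skipped")  # 512 MiB into ftl0
    print(count_blocks * BLOCK_SIZE // 2**20, "MiB copied")  # 1024 MiB, matching the progress lines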
00:49:20.559 [2024-11-20 14:10:17.758545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.148 ms, result 0 00:49:21.936  [2024-11-20T14:10:20.195Z] Copying: 23/1024 [MB] (23 MBps) [... intermediate per-second progress updates elided ...] [2024-11-20T14:10:53.543Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 14:10:53.294039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.294115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:56.220 [2024-11-20 14:10:53.294140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:56.220 [2024-11-20 14:10:53.294169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.294203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:56.220 [2024-11-20 14:10:53.300717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.300753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:56.220 [2024-11-20 14:10:53.300766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.488 ms 00:49:56.220 [2024-11-20 14:10:53.300777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.300988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.301001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core
poller 00:49:56.220 [2024-11-20 14:10:53.301012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:49:56.220 [2024-11-20 14:10:53.301023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.304818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.304855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:56.220 [2024-11-20 14:10:53.304869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.772 ms 00:49:56.220 [2024-11-20 14:10:53.304881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.310588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.310620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:56.220 [2024-11-20 14:10:53.310633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.666 ms 00:49:56.220 [2024-11-20 14:10:53.310643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.348162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.348199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:56.220 [2024-11-20 14:10:53.348213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.463 ms 00:49:56.220 [2024-11-20 14:10:53.348223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.369501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.369551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:56.220 [2024-11-20 14:10:53.369565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.230 ms 00:49:56.220 [2024-11-20 14:10:53.369576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.477146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.477215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:56.220 [2024-11-20 14:10:53.477231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.524 ms 00:49:56.220 [2024-11-20 14:10:53.477241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.220 [2024-11-20 14:10:53.516444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.220 [2024-11-20 14:10:53.516498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:56.220 [2024-11-20 14:10:53.516514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.184 ms 00:49:56.220 [2024-11-20 14:10:53.516524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.481 [2024-11-20 14:10:53.554876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.481 [2024-11-20 14:10:53.554914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:56.481 [2024-11-20 14:10:53.554958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.311 ms 00:49:56.481 [2024-11-20 14:10:53.554968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.481 [2024-11-20 14:10:53.592896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:56.481 [2024-11-20 14:10:53.592956] 
00:49:56.481 [2024-11-20 14:10:53.592956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:49:56.481 [2024-11-20 14:10:53.592972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.886 ms
00:49:56.481 [2024-11-20 14:10:53.592982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:56.481 [2024-11-20 14:10:53.630302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:56.481 [2024-11-20 14:10:53.630343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:49:56.481 [2024-11-20 14:10:53.630373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.238 ms
00:49:56.481 [2024-11-20 14:10:53.630383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:56.481 [2024-11-20 14:10:53.630432] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:49:56.481 [2024-11-20 14:10:53.630450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:49:56.481 [2024-11-20 14:10:53.630463 .. 14:10:53.631578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [99 identical entries trimmed]
00:49:56.482 [2024-11-20 14:10:53.631597] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:49:56.482 [2024-11-20 14:10:53.631608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018e6bcd-70cc-43bb-a7ea-245d85b0b2ca
00:49:56.482 [2024-11-20 14:10:53.631620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:49:56.482 [2024-11-20 14:10:53.631639] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14784
00:49:56.482 [2024-11-20 14:10:53.631650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13824
00:49:56.482 [2024-11-20 14:10:53.631661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0694
00:49:56.482 [2024-11-20 14:10:53.631672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:49:56.482 [2024-11-20 14:10:53.631689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:49:56.482 [2024-11-20 14:10:53.631700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:49:56.482 [2024-11-20 14:10:53.631722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:49:56.482 [2024-11-20 14:10:53.631732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:49:56.482 [2024-11-20 14:10:53.631743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:56.482 [2024-11-20 14:10:53.631755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:49:56.482 [2024-11-20 14:10:53.631765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms
00:49:56.482 [2024-11-20 14:10:53.631776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:56.482 [2024-11-20 14:10:53.652066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:56.482 [2024-11-20 14:10:53.652102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:49:56.482 [2024-11-20 14:10:53.652116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms
00:49:56.482 [2024-11-20 14:10:53.652132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:56.482 [2024-11-20 14:10:53.652731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:56.482 [2024-11-20 14:10:53.652750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:49:56.482 [2024-11-20 14:10:53.652761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms
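The WAF reported in the dump above is simply total writes divided by user writes, both taken from this run's counters; a quick sanity check (a sketch, with the values copied from the dump):

  $ echo 'scale=4; 14784 / 13824' | bc
  1.0694

The 960 writes beyond the 13824 user writes are the FTL's own metadata and relocation traffic.

00:49:56.482 [2024-11-20 14:10:53.652771]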
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.482 [2024-11-20 14:10:53.705811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.482 [2024-11-20 14:10:53.705853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:56.482 [2024-11-20 14:10:53.705882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.482 [2024-11-20 14:10:53.705892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.482 [2024-11-20 14:10:53.705949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.482 [2024-11-20 14:10:53.705959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:56.482 [2024-11-20 14:10:53.705970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.482 [2024-11-20 14:10:53.705979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.482 [2024-11-20 14:10:53.706048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.482 [2024-11-20 14:10:53.706061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:56.482 [2024-11-20 14:10:53.706076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.482 [2024-11-20 14:10:53.706086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.482 [2024-11-20 14:10:53.706103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.482 [2024-11-20 14:10:53.706114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:56.482 [2024-11-20 14:10:53.706124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.482 [2024-11-20 14:10:53.706134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.834507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.834554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:56.742 [2024-11-20 14:10:53.834574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.834585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.937544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.937590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:56.742 [2024-11-20 14:10:53.937621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.937632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.937722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.937736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:56.742 [2024-11-20 14:10:53.937747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.937763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.937810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.937822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:56.742 [2024-11-20 14:10:53.937833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.937843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.937951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.937964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:56.742 [2024-11-20 14:10:53.937974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.937984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.938025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.938038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:56.742 [2024-11-20 14:10:53.938048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.938058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.742 [2024-11-20 14:10:53.938096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.742 [2024-11-20 14:10:53.938108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:56.742 [2024-11-20 14:10:53.938118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.742 [2024-11-20 14:10:53.938128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.743 [2024-11-20 14:10:53.938172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:56.743 [2024-11-20 14:10:53.938185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:56.743 [2024-11-20 14:10:53.938195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:56.743 [2024-11-20 14:10:53.938205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:56.743 [2024-11-20 14:10:53.938331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.258 ms, result 0 00:49:57.681 00:49:57.681 00:49:57.940 14:10:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:49:59.843 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:49:59.843 14:10:56 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:49:59.843 14:10:56 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:49:59.843 14:10:56 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:49:59.843 14:10:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:49:59.843 14:10:57 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:59.843 14:10:57 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79587 00:49:59.843 14:10:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79587 ']' 00:49:59.843 14:10:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79587 00:49:59.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79587) - No such process 00:49:59.843 Process with pid 79587 is not found 00:49:59.843 14:10:57 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79587 is not found' 00:49:59.843 14:10:57 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:49:59.844 Remove shared memory files 00:49:59.844 14:10:57 
ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:49:59.844 14:10:57 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:49:59.844 14:10:57 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:49:59.844 14:10:57 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:49:59.844 14:10:57 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:49:59.844 14:10:57 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:49:59.844 00:49:59.844 real 3m8.928s 00:49:59.844 user 2m54.993s 00:49:59.844 sys 0m15.158s 00:49:59.844 14:10:57 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:59.844 14:10:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:49:59.844 ************************************ 00:49:59.844 END TEST ftl_restore 00:49:59.844 ************************************ 00:49:59.844 14:10:57 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:49:59.844 14:10:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:49:59.844 14:10:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:59.844 14:10:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:49:59.844 ************************************ 00:49:59.844 START TEST ftl_dirty_shutdown 00:49:59.844 ************************************ 00:49:59.844 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:50:00.103 * Looking for test storage... 00:50:00.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:00.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:00.103 --rc genhtml_branch_coverage=1 00:50:00.103 --rc genhtml_function_coverage=1 00:50:00.103 --rc genhtml_legend=1 00:50:00.103 --rc geninfo_all_blocks=1 00:50:00.103 --rc geninfo_unexecuted_blocks=1 00:50:00.103 00:50:00.103 ' 00:50:00.103 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:00.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:00.103 --rc genhtml_branch_coverage=1 00:50:00.103 --rc genhtml_function_coverage=1 00:50:00.103 --rc genhtml_legend=1 00:50:00.104 --rc geninfo_all_blocks=1 00:50:00.104 --rc geninfo_unexecuted_blocks=1 00:50:00.104 00:50:00.104 ' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:00.104 --rc genhtml_branch_coverage=1 00:50:00.104 --rc genhtml_function_coverage=1 00:50:00.104 --rc genhtml_legend=1 00:50:00.104 --rc geninfo_all_blocks=1 00:50:00.104 --rc geninfo_unexecuted_blocks=1 00:50:00.104 00:50:00.104 ' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:00.104 --rc genhtml_branch_coverage=1 00:50:00.104 --rc genhtml_function_coverage=1 00:50:00.104 --rc genhtml_legend=1 00:50:00.104 --rc geninfo_all_blocks=1 00:50:00.104 --rc geninfo_unexecuted_blocks=1 00:50:00.104 00:50:00.104 ' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:50:00.104 14:10:57 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81571 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81571 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81571 ']' 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:00.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:00.104 14:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:00.362 [2024-11-20 14:10:57.467986] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
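The target launch and the waitforlisten handshake traced above reduce to a small shell pattern; a minimal sketch (binary and socket paths as used in this run; the polling loop is an approximation of what waitforlisten does over the RPC socket):

  # start the SPDK target pinned to core 0 and remember its pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll until the target answers on /var/tmp/spdk.sock; rpc_get_methods
  # is a cheap RPC that succeeds once the app thread is serving requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done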
00:50:00.362 [2024-11-20 14:10:57.468273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81571 ]
00:50:00.621 [2024-11-20 14:10:57.692088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:50:00.621 [2024-11-20 14:10:57.874283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:50:01.559 14:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
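The get_bdev_size helper traced below derives the bdev size in MiB from bdev_get_bdevs JSON; a sketch of the same computation with the geometry this controller reports (4096-byte blocks, 1310720 of them):

  bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')  # 4096
  nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')  # 1310720
  echo $(( bs * nb / 1024 / 1024 ))  # 4096 B * 1310720 blocks = 5120 MiB

which matches the bdev_size=5120 that the trace computes before the size check in ftl/common.sh.

00:50:01.820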
"copy": true, 00:50:02.078 "nvme_iov_md": false 00:50:02.078 }, 00:50:02.078 "driver_specific": { 00:50:02.078 "nvme": [ 00:50:02.078 { 00:50:02.078 "pci_address": "0000:00:11.0", 00:50:02.078 "trid": { 00:50:02.078 "trtype": "PCIe", 00:50:02.078 "traddr": "0000:00:11.0" 00:50:02.078 }, 00:50:02.078 "ctrlr_data": { 00:50:02.078 "cntlid": 0, 00:50:02.078 "vendor_id": "0x1b36", 00:50:02.079 "model_number": "QEMU NVMe Ctrl", 00:50:02.079 "serial_number": "12341", 00:50:02.079 "firmware_revision": "8.0.0", 00:50:02.079 "subnqn": "nqn.2019-08.org.qemu:12341", 00:50:02.079 "oacs": { 00:50:02.079 "security": 0, 00:50:02.079 "format": 1, 00:50:02.079 "firmware": 0, 00:50:02.079 "ns_manage": 1 00:50:02.079 }, 00:50:02.079 "multi_ctrlr": false, 00:50:02.079 "ana_reporting": false 00:50:02.079 }, 00:50:02.079 "vs": { 00:50:02.079 "nvme_version": "1.4" 00:50:02.079 }, 00:50:02.079 "ns_data": { 00:50:02.079 "id": 1, 00:50:02.079 "can_share": false 00:50:02.079 } 00:50:02.079 } 00:50:02.079 ], 00:50:02.079 "mp_policy": "active_passive" 00:50:02.079 } 00:50:02.079 } 00:50:02.079 ]' 00:50:02.079 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:50:02.341 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:50:02.604 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=eda08c77-76d3-4558-a235-34a0ac2b9bac 00:50:02.604 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:50:02.604 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eda08c77-76d3-4558-a235-34a0ac2b9bac 00:50:02.862 14:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=07c130f2-1dfc-49a9-a398-0e12c7879a7a 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 07c130f2-1dfc-49a9-a398-0e12c7879a7a 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:50:03.122 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.381 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:03.381 { 00:50:03.381 "name": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:03.381 "aliases": [ 00:50:03.381 "lvs/nvme0n1p0" 00:50:03.381 ], 00:50:03.381 "product_name": "Logical Volume", 00:50:03.381 "block_size": 4096, 00:50:03.381 "num_blocks": 26476544, 00:50:03.381 "uuid": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:03.381 "assigned_rate_limits": { 00:50:03.381 "rw_ios_per_sec": 0, 00:50:03.381 "rw_mbytes_per_sec": 0, 00:50:03.381 "r_mbytes_per_sec": 0, 00:50:03.381 "w_mbytes_per_sec": 0 00:50:03.381 }, 00:50:03.381 "claimed": false, 00:50:03.381 "zoned": false, 00:50:03.381 "supported_io_types": { 00:50:03.381 "read": true, 00:50:03.381 "write": true, 00:50:03.381 "unmap": true, 00:50:03.381 "flush": false, 00:50:03.381 "reset": true, 00:50:03.381 "nvme_admin": false, 00:50:03.381 "nvme_io": false, 00:50:03.381 "nvme_io_md": false, 00:50:03.381 "write_zeroes": true, 00:50:03.381 "zcopy": false, 00:50:03.381 "get_zone_info": false, 00:50:03.381 "zone_management": false, 00:50:03.381 "zone_append": false, 00:50:03.381 "compare": false, 00:50:03.381 "compare_and_write": false, 00:50:03.381 "abort": false, 00:50:03.381 "seek_hole": true, 00:50:03.381 "seek_data": true, 00:50:03.381 "copy": false, 00:50:03.381 "nvme_iov_md": false 00:50:03.381 }, 00:50:03.381 "driver_specific": { 00:50:03.381 "lvol": { 00:50:03.381 "lvol_store_uuid": "07c130f2-1dfc-49a9-a398-0e12c7879a7a", 00:50:03.381 "base_bdev": "nvme0n1", 00:50:03.381 "thin_provision": true, 00:50:03.381 "num_allocated_clusters": 0, 00:50:03.381 "snapshot": false, 00:50:03.381 "clone": false, 00:50:03.381 "esnap_clone": false 00:50:03.381 } 00:50:03.381 } 00:50:03.381 } 00:50:03.381 ]' 00:50:03.381 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:50:03.639 14:11:00 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:50:03.898 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:04.158 { 00:50:04.158 "name": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:04.158 "aliases": [ 00:50:04.158 "lvs/nvme0n1p0" 00:50:04.158 ], 00:50:04.158 "product_name": "Logical Volume", 00:50:04.158 "block_size": 4096, 00:50:04.158 "num_blocks": 26476544, 00:50:04.158 "uuid": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:04.158 "assigned_rate_limits": { 00:50:04.158 "rw_ios_per_sec": 0, 00:50:04.158 "rw_mbytes_per_sec": 0, 00:50:04.158 "r_mbytes_per_sec": 0, 00:50:04.158 "w_mbytes_per_sec": 0 00:50:04.158 }, 00:50:04.158 "claimed": false, 00:50:04.158 "zoned": false, 00:50:04.158 "supported_io_types": { 00:50:04.158 "read": true, 00:50:04.158 "write": true, 00:50:04.158 "unmap": true, 00:50:04.158 "flush": false, 00:50:04.158 "reset": true, 00:50:04.158 "nvme_admin": false, 00:50:04.158 "nvme_io": false, 00:50:04.158 "nvme_io_md": false, 00:50:04.158 "write_zeroes": true, 00:50:04.158 "zcopy": false, 00:50:04.158 "get_zone_info": false, 00:50:04.158 "zone_management": false, 00:50:04.158 "zone_append": false, 00:50:04.158 "compare": false, 00:50:04.158 "compare_and_write": false, 00:50:04.158 "abort": false, 00:50:04.158 "seek_hole": true, 00:50:04.158 "seek_data": true, 00:50:04.158 "copy": false, 00:50:04.158 "nvme_iov_md": false 00:50:04.158 }, 00:50:04.158 "driver_specific": { 00:50:04.158 "lvol": { 00:50:04.158 "lvol_store_uuid": "07c130f2-1dfc-49a9-a398-0e12c7879a7a", 00:50:04.158 "base_bdev": "nvme0n1", 00:50:04.158 "thin_provision": true, 00:50:04.158 "num_allocated_clusters": 0, 00:50:04.158 "snapshot": false, 00:50:04.158 "clone": false, 00:50:04.158 "esnap_clone": false 00:50:04.158 } 00:50:04.158 } 00:50:04.158 } 00:50:04.158 ]' 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:50:04.158 14:11:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:50:04.416 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55d3a8e1-2660-4db1-9897-241e6a64fa98 00:50:04.676 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:04.676 { 00:50:04.676 "name": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:04.676 "aliases": [ 00:50:04.676 "lvs/nvme0n1p0" 00:50:04.676 ], 00:50:04.676 "product_name": "Logical Volume", 00:50:04.676 "block_size": 4096, 00:50:04.676 "num_blocks": 26476544, 00:50:04.676 "uuid": "55d3a8e1-2660-4db1-9897-241e6a64fa98", 00:50:04.676 "assigned_rate_limits": { 00:50:04.676 "rw_ios_per_sec": 0, 00:50:04.676 "rw_mbytes_per_sec": 0, 00:50:04.676 "r_mbytes_per_sec": 0, 00:50:04.676 "w_mbytes_per_sec": 0 00:50:04.676 }, 00:50:04.676 "claimed": false, 00:50:04.676 "zoned": false, 00:50:04.676 "supported_io_types": { 00:50:04.676 "read": true, 00:50:04.676 "write": true, 00:50:04.676 "unmap": true, 00:50:04.676 "flush": false, 00:50:04.676 "reset": true, 00:50:04.676 "nvme_admin": false, 00:50:04.676 "nvme_io": false, 00:50:04.676 "nvme_io_md": false, 00:50:04.676 "write_zeroes": true, 00:50:04.676 "zcopy": false, 00:50:04.676 "get_zone_info": false, 00:50:04.676 "zone_management": false, 00:50:04.676 "zone_append": false, 00:50:04.676 "compare": false, 00:50:04.676 "compare_and_write": false, 00:50:04.676 "abort": false, 00:50:04.676 "seek_hole": true, 00:50:04.676 "seek_data": true, 00:50:04.676 "copy": false, 00:50:04.676 "nvme_iov_md": false 00:50:04.676 }, 00:50:04.676 "driver_specific": { 00:50:04.676 "lvol": { 00:50:04.676 "lvol_store_uuid": "07c130f2-1dfc-49a9-a398-0e12c7879a7a", 00:50:04.676 "base_bdev": "nvme0n1", 00:50:04.676 "thin_provision": true, 00:50:04.676 "num_allocated_clusters": 0, 00:50:04.676 "snapshot": false, 00:50:04.676 "clone": false, 00:50:04.676 "esnap_clone": false 00:50:04.676 } 00:50:04.676 } 00:50:04.676 } 00:50:04.676 ]' 00:50:04.676 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:04.676 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:50:04.676 14:11:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 55d3a8e1-2660-4db1-9897-241e6a64fa98 
--l2p_dram_limit 10' 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:50:04.935 14:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 55d3a8e1-2660-4db1-9897-241e6a64fa98 --l2p_dram_limit 10 -c nvc0n1p0 00:50:04.935 [2024-11-20 14:11:02.242392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.935 [2024-11-20 14:11:02.242449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:04.935 [2024-11-20 14:11:02.242469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:04.935 [2024-11-20 14:11:02.242507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.935 [2024-11-20 14:11:02.242580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.935 [2024-11-20 14:11:02.242594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:04.935 [2024-11-20 14:11:02.242607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:04.935 [2024-11-20 14:11:02.242618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.935 [2024-11-20 14:11:02.242642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:04.935 [2024-11-20 14:11:02.243748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:04.935 [2024-11-20 14:11:02.243784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.935 [2024-11-20 14:11:02.243795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:04.935 [2024-11-20 14:11:02.243810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:50:04.936 [2024-11-20 14:11:02.243820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.936 [2024-11-20 14:11:02.244004] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0bcf76a6-8843-4978-96b3-1dfc317e1622 00:50:04.936 [2024-11-20 14:11:02.245438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.936 [2024-11-20 14:11:02.245488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:50:04.936 [2024-11-20 14:11:02.245503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:50:04.936 [2024-11-20 14:11:02.245518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.936 [2024-11-20 14:11:02.253092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.936 [2024-11-20 14:11:02.253143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:04.936 [2024-11-20 14:11:02.253173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.529 ms 00:50:04.936 [2024-11-20 14:11:02.253186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.936 [2024-11-20 14:11:02.253306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.936 [2024-11-20 14:11:02.253327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:04.936 [2024-11-20 14:11:02.253338] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:50:04.936 [2024-11-20 14:11:02.253356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.936 [2024-11-20 14:11:02.253438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:04.936 [2024-11-20 14:11:02.253455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:04.936 [2024-11-20 14:11:02.253466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:04.936 [2024-11-20 14:11:02.253482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:04.936 [2024-11-20 14:11:02.253520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:05.195 [2024-11-20 14:11:02.258441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:05.195 [2024-11-20 14:11:02.258489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:05.195 [2024-11-20 14:11:02.258523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.924 ms 00:50:05.195 [2024-11-20 14:11:02.258534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:05.195 [2024-11-20 14:11:02.258575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:05.195 [2024-11-20 14:11:02.258586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:05.195 [2024-11-20 14:11:02.258599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:05.195 [2024-11-20 14:11:02.258609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:05.195 [2024-11-20 14:11:02.258653] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:50:05.195 [2024-11-20 14:11:02.258786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:05.195 [2024-11-20 14:11:02.258806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:05.195 [2024-11-20 14:11:02.258821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:05.195 [2024-11-20 14:11:02.258837] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:05.195 [2024-11-20 14:11:02.258849] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:05.195 [2024-11-20 14:11:02.258863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:05.195 [2024-11-20 14:11:02.258873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:05.195 [2024-11-20 14:11:02.258888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:05.195 [2024-11-20 14:11:02.258898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:05.195 [2024-11-20 14:11:02.258910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:05.195 [2024-11-20 14:11:02.258921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:05.195 [2024-11-20 14:11:02.258934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:50:05.195 [2024-11-20 14:11:02.258955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:05.196 [2024-11-20 14:11:02.259034] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:05.196 [2024-11-20 14:11:02.259049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:05.196 [2024-11-20 14:11:02.259063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:50:05.196 [2024-11-20 14:11:02.259074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:05.196 [2024-11-20 14:11:02.259175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:05.196 [2024-11-20 14:11:02.259189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:05.196 [2024-11-20 14:11:02.259203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:05.196 [2024-11-20 14:11:02.259235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:05.196 [2024-11-20 14:11:02.259269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:05.196 [2024-11-20 14:11:02.259290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:05.196 [2024-11-20 14:11:02.259300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:05.196 [2024-11-20 14:11:02.259312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:05.196 [2024-11-20 14:11:02.259321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:05.196 [2024-11-20 14:11:02.259334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:50:05.196 [2024-11-20 14:11:02.259343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:05.196 [2024-11-20 14:11:02.259370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:05.196 [2024-11-20 14:11:02.259405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:05.196 [2024-11-20 14:11:02.259436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:05.196 [2024-11-20 14:11:02.259469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259502] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:05.196 [2024-11-20 14:11:02.259512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:05.196 [2024-11-20 14:11:02.259547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:05.196 [2024-11-20 14:11:02.259569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:05.196 [2024-11-20 14:11:02.259578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:50:05.196 [2024-11-20 14:11:02.259589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:05.196 [2024-11-20 14:11:02.259598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:05.196 [2024-11-20 14:11:02.259610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:50:05.196 [2024-11-20 14:11:02.259631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:05.196 [2024-11-20 14:11:02.259652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:50:05.196 [2024-11-20 14:11:02.259664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259673] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:05.196 [2024-11-20 14:11:02.259686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:05.196 [2024-11-20 14:11:02.259696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:05.196 [2024-11-20 14:11:02.259721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:05.196 [2024-11-20 14:11:02.259736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:05.196 [2024-11-20 14:11:02.259746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:05.196 [2024-11-20 14:11:02.259758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:05.196 [2024-11-20 14:11:02.259768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:05.196 [2024-11-20 14:11:02.259780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:05.196 [2024-11-20 14:11:02.259794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:05.196 [2024-11-20 14:11:02.259810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.259825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:05.196 [2024-11-20 14:11:02.259838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:50:05.196 [2024-11-20 14:11:02.259849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:50:05.196 [2024-11-20 14:11:02.259861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:50:05.196 [2024-11-20 14:11:02.259872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:50:05.196 [2024-11-20 14:11:02.259884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:50:05.196 [2024-11-20 14:11:02.259896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:50:05.196 [2024-11-20 14:11:02.259908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:50:05.196 [2024-11-20 14:11:02.259918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:50:05.196 [2024-11-20 14:11:02.259933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.259944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.259957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.259968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.259982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:50:05.196 [2024-11-20 14:11:02.259992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:05.196 [2024-11-20 14:11:02.260006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.260017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:05.196 [2024-11-20 14:11:02.260030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:05.196 [2024-11-20 14:11:02.260041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:05.196 [2024-11-20 14:11:02.260053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:05.196 [2024-11-20 14:11:02.260064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:05.196 [2024-11-20 14:11:02.260077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:05.196 [2024-11-20 14:11:02.260088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:50:05.196 [2024-11-20 14:11:02.260100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:05.196 [2024-11-20 14:11:02.260144] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:50:05.196 [2024-11-20 14:11:02.260163] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:50:08.496 [2024-11-20 14:11:05.144931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.144990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:50:08.496 [2024-11-20 14:11:05.145006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2884.769 ms 00:50:08.496 [2024-11-20 14:11:05.145020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.187289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.187349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:08.496 [2024-11-20 14:11:05.187365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.941 ms 00:50:08.496 [2024-11-20 14:11:05.187379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.187549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.187566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:08.496 [2024-11-20 14:11:05.187578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:50:08.496 [2024-11-20 14:11:05.187598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.238311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.238368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:08.496 [2024-11-20 14:11:05.238385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.628 ms 00:50:08.496 [2024-11-20 14:11:05.238399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.238448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.238467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:08.496 [2024-11-20 14:11:05.238487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:50:08.496 [2024-11-20 14:11:05.238501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.239014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.239044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:08.496 [2024-11-20 14:11:05.239060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:50:08.496 [2024-11-20 14:11:05.239074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.239185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.239200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:08.496 [2024-11-20 14:11:05.239215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:50:08.496 [2024-11-20 14:11:05.239231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.261202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.261262] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:08.496 [2024-11-20 14:11:05.261279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.948 ms 00:50:08.496 [2024-11-20 14:11:05.261293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.287747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:50:08.496 [2024-11-20 14:11:05.290984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.291020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:08.496 [2024-11-20 14:11:05.291038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.542 ms 00:50:08.496 [2024-11-20 14:11:05.291049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.377111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.377179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:50:08.496 [2024-11-20 14:11:05.377215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.009 ms 00:50:08.496 [2024-11-20 14:11:05.377227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.377434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.377451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:08.496 [2024-11-20 14:11:05.377467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:50:08.496 [2024-11-20 14:11:05.377477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.414905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.414957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:50:08.496 [2024-11-20 14:11:05.414975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.355 ms 00:50:08.496 [2024-11-20 14:11:05.414986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.452979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.453041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:50:08.496 [2024-11-20 14:11:05.453061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.938 ms 00:50:08.496 [2024-11-20 14:11:05.453071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.496 [2024-11-20 14:11:05.453775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.496 [2024-11-20 14:11:05.453804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:08.497 [2024-11-20 14:11:05.453819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:50:08.497 [2024-11-20 14:11:05.453832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.562196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.562258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:50:08.497 [2024-11-20 14:11:05.562282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.295 ms 00:50:08.497 [2024-11-20 14:11:05.562293] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.601918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.601973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:50:08.497 [2024-11-20 14:11:05.601993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.540 ms 00:50:08.497 [2024-11-20 14:11:05.602004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.640360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.640413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:50:08.497 [2024-11-20 14:11:05.640431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.301 ms 00:50:08.497 [2024-11-20 14:11:05.640442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.678554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.678605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:08.497 [2024-11-20 14:11:05.678623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.076 ms 00:50:08.497 [2024-11-20 14:11:05.678650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.678684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.678695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:08.497 [2024-11-20 14:11:05.678712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:08.497 [2024-11-20 14:11:05.678722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.678829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:08.497 [2024-11-20 14:11:05.678842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:08.497 [2024-11-20 14:11:05.678860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:50:08.497 [2024-11-20 14:11:05.678870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:08.497 [2024-11-20 14:11:05.679985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3437.042 ms, result 0 00:50:08.497 { 00:50:08.497 "name": "ftl0", 00:50:08.497 "uuid": "0bcf76a6-8843-4978-96b3-1dfc317e1622" 00:50:08.497 } 00:50:08.497 14:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:50:08.497 14:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:50:08.756 14:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:50:08.756 14:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:50:08.756 14:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:50:09.015 /dev/nbd0 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:50:09.015 1+0 records in 00:50:09.015 1+0 records out 00:50:09.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447447 s, 9.2 MB/s 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:50:09.015 14:11:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:50:09.274 [2024-11-20 14:11:06.437616] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:50:09.274 [2024-11-20 14:11:06.437822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81723 ] 00:50:09.533 [2024-11-20 14:11:06.638371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:09.791 [2024-11-20 14:11:06.870922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:11.167  [2024-11-20T14:11:09.426Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-20T14:11:10.361Z] Copying: 383/1024 [MB] (191 MBps) [2024-11-20T14:11:11.297Z] Copying: 579/1024 [MB] (196 MBps) [2024-11-20T14:11:12.233Z] Copying: 749/1024 [MB] (169 MBps) [2024-11-20T14:11:13.173Z] Copying: 892/1024 [MB] (142 MBps) [2024-11-20T14:11:14.109Z] Copying: 1024/1024 [MB] (average 179 MBps) 00:50:16.786 00:50:17.045 14:11:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:50:18.951 14:11:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:50:18.951 [2024-11-20 14:11:16.067303] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
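The trace above covers the NBD plumbing for the data-integrity pass: the FTL bdev is exported as /dev/nbd0, waitfornbd polls /proc/partitions and sanity-reads a single 4 KiB block, and then 1 GiB of random data (262144 x 4096-byte blocks) is generated and checksummed before being copied onto the device with O_DIRECT. A minimal plain-shell sketch of that pattern, using the names from this run — note the harness drives the bulk copies through spdk_dd rather than dd, and the poll delay below is an assumption:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    for i in $(seq 1 20); do                       # waitfornbd: up to 20 polls
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                  # assumed back-off between polls
    done
    # a one-block direct read proves the device answers I/O:
    dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s nbdtest)" != 0 ] && rm -f nbdtest
    # stage the payload and its reference checksum:
    dd if=/dev/urandom of=testfile bs=4096 count=262144    # 262144 x 4 KiB = 1 GiB
    md5sum testfile

The slow ~18 MBps progress that follows (versus ~179 MBps when writing the plain file) is the copy actually passing through the FTL translation layer and its 10 MiB-limited L2P cache on its way to the base device.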
00:50:18.951 [2024-11-20 14:11:16.067541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81821 ] 00:50:18.951 [2024-11-20 14:11:16.261911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:19.211 [2024-11-20 14:11:16.404644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:20.590  [2024-11-20T14:11:18.849Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-20T14:11:19.784Z] Copying: 34/1024 [MB] (16 MBps) [2024-11-20T14:11:21.162Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-20T14:11:22.099Z] Copying: 70/1024 [MB] (17 MBps) [2024-11-20T14:11:23.035Z] Copying: 87/1024 [MB] (17 MBps) [2024-11-20T14:11:23.972Z] Copying: 105/1024 [MB] (17 MBps) [2024-11-20T14:11:24.908Z] Copying: 123/1024 [MB] (18 MBps) [2024-11-20T14:11:25.844Z] Copying: 141/1024 [MB] (17 MBps) [2024-11-20T14:11:26.821Z] Copying: 159/1024 [MB] (18 MBps) [2024-11-20T14:11:27.756Z] Copying: 177/1024 [MB] (17 MBps) [2024-11-20T14:11:29.134Z] Copying: 194/1024 [MB] (17 MBps) [2024-11-20T14:11:30.069Z] Copying: 211/1024 [MB] (16 MBps) [2024-11-20T14:11:31.005Z] Copying: 228/1024 [MB] (16 MBps) [2024-11-20T14:11:31.940Z] Copying: 244/1024 [MB] (16 MBps) [2024-11-20T14:11:32.875Z] Copying: 260/1024 [MB] (16 MBps) [2024-11-20T14:11:33.810Z] Copying: 277/1024 [MB] (16 MBps) [2024-11-20T14:11:35.187Z] Copying: 294/1024 [MB] (16 MBps) [2024-11-20T14:11:35.756Z] Copying: 310/1024 [MB] (16 MBps) [2024-11-20T14:11:37.180Z] Copying: 326/1024 [MB] (16 MBps) [2024-11-20T14:11:38.116Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-20T14:11:39.052Z] Copying: 361/1024 [MB] (17 MBps) [2024-11-20T14:11:39.988Z] Copying: 380/1024 [MB] (19 MBps) [2024-11-20T14:11:40.922Z] Copying: 398/1024 [MB] (17 MBps) [2024-11-20T14:11:41.857Z] Copying: 416/1024 [MB] (18 MBps) [2024-11-20T14:11:42.793Z] Copying: 435/1024 [MB] (18 MBps) [2024-11-20T14:11:44.171Z] Copying: 454/1024 [MB] (18 MBps) [2024-11-20T14:11:45.106Z] Copying: 472/1024 [MB] (18 MBps) [2024-11-20T14:11:46.042Z] Copying: 492/1024 [MB] (19 MBps) [2024-11-20T14:11:46.975Z] Copying: 510/1024 [MB] (18 MBps) [2024-11-20T14:11:47.912Z] Copying: 529/1024 [MB] (18 MBps) [2024-11-20T14:11:48.847Z] Copying: 547/1024 [MB] (18 MBps) [2024-11-20T14:11:49.782Z] Copying: 566/1024 [MB] (18 MBps) [2024-11-20T14:11:50.773Z] Copying: 585/1024 [MB] (19 MBps) [2024-11-20T14:11:52.164Z] Copying: 604/1024 [MB] (18 MBps) [2024-11-20T14:11:53.101Z] Copying: 622/1024 [MB] (18 MBps) [2024-11-20T14:11:54.039Z] Copying: 641/1024 [MB] (18 MBps) [2024-11-20T14:11:54.975Z] Copying: 659/1024 [MB] (18 MBps) [2024-11-20T14:11:55.909Z] Copying: 678/1024 [MB] (18 MBps) [2024-11-20T14:11:56.843Z] Copying: 697/1024 [MB] (18 MBps) [2024-11-20T14:11:57.778Z] Copying: 715/1024 [MB] (18 MBps) [2024-11-20T14:11:59.171Z] Copying: 733/1024 [MB] (17 MBps) [2024-11-20T14:12:00.105Z] Copying: 751/1024 [MB] (18 MBps) [2024-11-20T14:12:01.040Z] Copying: 769/1024 [MB] (18 MBps) [2024-11-20T14:12:01.976Z] Copying: 788/1024 [MB] (18 MBps) [2024-11-20T14:12:02.935Z] Copying: 804/1024 [MB] (16 MBps) [2024-11-20T14:12:03.869Z] Copying: 823/1024 [MB] (18 MBps) [2024-11-20T14:12:04.849Z] Copying: 842/1024 [MB] (18 MBps) [2024-11-20T14:12:05.787Z] Copying: 861/1024 [MB] (18 MBps) [2024-11-20T14:12:07.163Z] Copying: 880/1024 [MB] (18 MBps) [2024-11-20T14:12:08.101Z] Copying: 899/1024 [MB] (18 MBps) 
[2024-11-20T14:12:09.037Z] Copying: 918/1024 [MB] (19 MBps) [2024-11-20T14:12:09.971Z] Copying: 937/1024 [MB] (19 MBps) [2024-11-20T14:12:10.904Z] Copying: 956/1024 [MB] (19 MBps) [2024-11-20T14:12:11.843Z] Copying: 975/1024 [MB] (18 MBps) [2024-11-20T14:12:12.779Z] Copying: 994/1024 [MB] (18 MBps) [2024-11-20T14:12:13.347Z] Copying: 1013/1024 [MB] (18 MBps) [2024-11-20T14:12:14.722Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:51:17.399 00:51:17.399 14:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:51:17.399 14:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:51:17.657 14:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:51:17.916 [2024-11-20 14:12:15.208468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:17.916 [2024-11-20 14:12:15.208847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:17.916 [2024-11-20 14:12:15.208878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:51:17.916 [2024-11-20 14:12:15.208896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:17.916 [2024-11-20 14:12:15.208950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:17.916 [2024-11-20 14:12:15.214005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:17.916 [2024-11-20 14:12:15.214053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:17.916 [2024-11-20 14:12:15.214073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:51:17.916 [2024-11-20 14:12:15.214086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:17.916 [2024-11-20 14:12:15.216025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:17.916 [2024-11-20 14:12:15.216214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:17.916 [2024-11-20 14:12:15.216247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:51:17.916 [2024-11-20 14:12:15.216271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:17.916 [2024-11-20 14:12:15.231708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:17.916 [2024-11-20 14:12:15.231797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:17.916 [2024-11-20 14:12:15.231819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.378 ms 00:51:17.916 [2024-11-20 14:12:15.231832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.238180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.238256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:18.176 [2024-11-20 14:12:15.238277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:51:18.176 [2024-11-20 14:12:15.238290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.286109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.286192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:18.176 [2024-11-20 14:12:15.286216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.655 ms 00:51:18.176 [2024-11-20 14:12:15.286229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.313738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.313826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:18.176 [2024-11-20 14:12:15.313865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.401 ms 00:51:18.176 [2024-11-20 14:12:15.313879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.314140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.314159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:18.176 [2024-11-20 14:12:15.314175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:51:18.176 [2024-11-20 14:12:15.314188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.362513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.362613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:18.176 [2024-11-20 14:12:15.362637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.285 ms 00:51:18.176 [2024-11-20 14:12:15.362649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.176 [2024-11-20 14:12:15.410561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.176 [2024-11-20 14:12:15.410646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:18.176 [2024-11-20 14:12:15.410671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.808 ms 00:51:18.177 [2024-11-20 14:12:15.410684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.177 [2024-11-20 14:12:15.458120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.177 [2024-11-20 14:12:15.458214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:18.177 [2024-11-20 14:12:15.458239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.323 ms 00:51:18.177 [2024-11-20 14:12:15.458253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.437 [2024-11-20 14:12:15.505981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.437 [2024-11-20 14:12:15.506063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:18.437 [2024-11-20 14:12:15.506087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.499 ms 00:51:18.437 [2024-11-20 14:12:15.506099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.437 [2024-11-20 14:12:15.506189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:18.437 [2024-11-20 14:12:15.506211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 
[2024-11-20 14:12:15.506286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:51:18.437 [2024-11-20 14:12:15.506682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.506996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:18.437 [2024-11-20 14:12:15.507230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:18.438 [2024-11-20 14:12:15.507803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:18.438 [2024-11-20 14:12:15.507819] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bcf76a6-8843-4978-96b3-1dfc317e1622 00:51:18.438 [2024-11-20 14:12:15.507832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:51:18.438 [2024-11-20 14:12:15.507850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:51:18.438 [2024-11-20 14:12:15.507863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:51:18.438 [2024-11-20 14:12:15.507883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:51:18.438 [2024-11-20 14:12:15.507895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:18.438 [2024-11-20 14:12:15.507910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:18.438 [2024-11-20 14:12:15.507922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:18.438 [2024-11-20 14:12:15.507936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:18.438 [2024-11-20 14:12:15.507947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:18.438 [2024-11-20 14:12:15.507963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.438 [2024-11-20 14:12:15.507975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:18.438 [2024-11-20 14:12:15.507991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:51:18.438 [2024-11-20 14:12:15.508003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.533124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.438 [2024-11-20 14:12:15.533464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:18.438 [2024-11-20 14:12:15.533523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:51:18.438 [2024-11-20 14:12:15.533538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.534227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:18.438 [2024-11-20 14:12:15.534248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:18.438 [2024-11-20 14:12:15.534265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:51:18.438 [2024-11-20 14:12:15.534278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.616063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.438 [2024-11-20 14:12:15.616135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:18.438 [2024-11-20 14:12:15.616158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.438 [2024-11-20 14:12:15.616171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.616263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.438 [2024-11-20 14:12:15.616277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:18.438 [2024-11-20 14:12:15.616293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.438 [2024-11-20 14:12:15.616305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.616459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.438 [2024-11-20 14:12:15.616503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:18.438 [2024-11-20 14:12:15.616521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.438 [2024-11-20 14:12:15.616533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.438 [2024-11-20 14:12:15.616564] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.438 [2024-11-20 14:12:15.616578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:18.438 [2024-11-20 14:12:15.616594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.438 [2024-11-20 14:12:15.616606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.698 [2024-11-20 14:12:15.771115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.698 [2024-11-20 14:12:15.771191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:18.698 [2024-11-20 14:12:15.771223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.698 [2024-11-20 14:12:15.771236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.698 [2024-11-20 14:12:15.898640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.698 [2024-11-20 14:12:15.898717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:18.698 [2024-11-20 14:12:15.898739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.698 [2024-11-20 14:12:15.898752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.698 [2024-11-20 14:12:15.898895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.698 [2024-11-20 14:12:15.898911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:18.699 [2024-11-20 14:12:15.898932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.898944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.699 [2024-11-20 14:12:15.899026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.699 [2024-11-20 14:12:15.899041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:18.699 [2024-11-20 14:12:15.899059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.899080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.699 [2024-11-20 14:12:15.899247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.699 [2024-11-20 14:12:15.899265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:18.699 [2024-11-20 14:12:15.899281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.899297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.699 [2024-11-20 14:12:15.899347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.699 [2024-11-20 14:12:15.899362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:18.699 [2024-11-20 14:12:15.899378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.899390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.699 [2024-11-20 14:12:15.899439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.699 [2024-11-20 14:12:15.899452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:18.699 [2024-11-20 14:12:15.899467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.899506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:51:18.699 [2024-11-20 14:12:15.899587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:18.699 [2024-11-20 14:12:15.899603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:18.699 [2024-11-20 14:12:15.899618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:18.699 [2024-11-20 14:12:15.899631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:18.699 [2024-11-20 14:12:15.899787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 691.278 ms, result 0 00:51:18.699 true 00:51:18.699 14:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81571 00:51:18.699 14:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81571 00:51:18.699 14:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:51:18.959 [2024-11-20 14:12:16.043601] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:51:18.959 [2024-11-20 14:12:16.044072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82420 ] 00:51:18.959 [2024-11-20 14:12:16.231639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:19.217 [2024-11-20 14:12:16.373080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:20.596  [2024-11-20T14:12:18.858Z] Copying: 160/1024 [MB] (160 MBps) [2024-11-20T14:12:19.793Z] Copying: 322/1024 [MB] (161 MBps) [2024-11-20T14:12:21.168Z] Copying: 484/1024 [MB] (161 MBps) [2024-11-20T14:12:22.104Z] Copying: 644/1024 [MB] (160 MBps) [2024-11-20T14:12:23.038Z] Copying: 796/1024 [MB] (151 MBps) [2024-11-20T14:12:23.297Z] Copying: 947/1024 [MB] (150 MBps) [2024-11-20T14:12:24.699Z] Copying: 1024/1024 [MB] (average 157 MBps) 00:51:27.376 00:51:27.376 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81571 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:51:27.376 14:12:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:27.634 [2024-11-20 14:12:24.771508] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:51:27.634 [2024-11-20 14:12:24.771712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82502 ] 00:51:27.892 [2024-11-20 14:12:24.976364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:27.892 [2024-11-20 14:12:25.156192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:28.458 [2024-11-20 14:12:25.612159] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:28.458 [2024-11-20 14:12:25.612507] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:28.458 [2024-11-20 14:12:25.680973] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:51:28.458 [2024-11-20 14:12:25.681339] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:51:28.458 [2024-11-20 14:12:25.681582] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:51:28.717 [2024-11-20 14:12:25.914690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.914766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:51:28.717 [2024-11-20 14:12:25.914787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:28.717 [2024-11-20 14:12:25.914800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.914877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.914892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:28.717 [2024-11-20 14:12:25.914905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:51:28.717 [2024-11-20 14:12:25.914917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.914943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:51:28.717 [2024-11-20 14:12:25.916324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:51:28.717 [2024-11-20 14:12:25.916368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.916383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:28.717 [2024-11-20 14:12:25.916396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:51:28.717 [2024-11-20 14:12:25.916408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.918092] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:51:28.717 [2024-11-20 14:12:25.942596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.942699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:51:28.717 [2024-11-20 14:12:25.942719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.500 ms 00:51:28.717 [2024-11-20 14:12:25.942732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.942851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.942868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:51:28.717 [2024-11-20 14:12:25.942882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:51:28.717 [2024-11-20 14:12:25.942895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.951141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.951208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:28.717 [2024-11-20 14:12:25.951227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.102 ms 00:51:28.717 [2024-11-20 14:12:25.951240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.951368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.951387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:28.717 [2024-11-20 14:12:25.951401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:51:28.717 [2024-11-20 14:12:25.951414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.717 [2024-11-20 14:12:25.951526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.717 [2024-11-20 14:12:25.951543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:51:28.718 [2024-11-20 14:12:25.951556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:28.718 [2024-11-20 14:12:25.951568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.718 [2024-11-20 14:12:25.951611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:51:28.718 [2024-11-20 14:12:25.957516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.718 [2024-11-20 14:12:25.957574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:28.718 [2024-11-20 14:12:25.957591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.911 ms 00:51:28.718 [2024-11-20 14:12:25.957603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.718 [2024-11-20 14:12:25.957666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.718 [2024-11-20 14:12:25.957680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:51:28.718 [2024-11-20 14:12:25.957694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:51:28.718 [2024-11-20 14:12:25.957708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.718 [2024-11-20 14:12:25.957793] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:51:28.718 [2024-11-20 14:12:25.957822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:51:28.718 [2024-11-20 14:12:25.957865] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:51:28.718 [2024-11-20 14:12:25.957888] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:51:28.718 [2024-11-20 14:12:25.957999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:51:28.718 [2024-11-20 14:12:25.958015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:51:28.718 
[2024-11-20 14:12:25.958032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:51:28.718 [2024-11-20 14:12:25.958048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958066] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:51:28.718 [2024-11-20 14:12:25.958092] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:51:28.718 [2024-11-20 14:12:25.958104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:51:28.718 [2024-11-20 14:12:25.958116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:51:28.718 [2024-11-20 14:12:25.958129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.718 [2024-11-20 14:12:25.958141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:51:28.718 [2024-11-20 14:12:25.958154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:51:28.718 [2024-11-20 14:12:25.958165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.718 [2024-11-20 14:12:25.958257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.718 [2024-11-20 14:12:25.958275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:51:28.718 [2024-11-20 14:12:25.958288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:51:28.718 [2024-11-20 14:12:25.958300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.718 [2024-11-20 14:12:25.958417] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:51:28.718 [2024-11-20 14:12:25.958435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:51:28.718 [2024-11-20 14:12:25.958447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:51:28.718 [2024-11-20 14:12:25.958506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:51:28.718 [2024-11-20 14:12:25.958544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:28.718 [2024-11-20 14:12:25.958585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:51:28.718 [2024-11-20 14:12:25.958621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:51:28.718 [2024-11-20 14:12:25.958637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:28.718 [2024-11-20 14:12:25.958656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:51:28.718 [2024-11-20 14:12:25.958678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:51:28.718 [2024-11-20 14:12:25.958698] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:51:28.718 [2024-11-20 14:12:25.958732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:51:28.718 [2024-11-20 14:12:25.958792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:51:28.718 [2024-11-20 14:12:25.958853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:51:28.718 [2024-11-20 14:12:25.958914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:51:28.718 [2024-11-20 14:12:25.958935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:28.718 [2024-11-20 14:12:25.958956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:51:28.718 [2024-11-20 14:12:25.958979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:51:28.718 [2024-11-20 14:12:25.959000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:28.718 [2024-11-20 14:12:25.959019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:51:28.718 [2024-11-20 14:12:25.959039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:51:28.718 [2024-11-20 14:12:25.959061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:28.718 [2024-11-20 14:12:25.959082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:51:28.718 [2024-11-20 14:12:25.959104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:51:28.718 [2024-11-20 14:12:25.959125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:28.718 [2024-11-20 14:12:25.959147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:51:28.718 [2024-11-20 14:12:25.959169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:51:28.718 [2024-11-20 14:12:25.959190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.959210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:51:28.718 [2024-11-20 14:12:25.959224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:51:28.718 [2024-11-20 14:12:25.959240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 14:12:25.959261] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:51:28.718 [2024-11-20 14:12:25.959285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:51:28.718 [2024-11-20 14:12:25.959308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:28.718 [2024-11-20 14:12:25.959347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:28.718 [2024-11-20 
14:12:25.959372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:51:28.718 [2024-11-20 14:12:25.959393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:51:28.718 [2024-11-20 14:12:25.959411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:51:28.718 [2024-11-20 14:12:25.959429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:51:28.718 [2024-11-20 14:12:25.959447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:51:28.718 [2024-11-20 14:12:25.959466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:51:28.718 [2024-11-20 14:12:25.959506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:51:28.718 [2024-11-20 14:12:25.959538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:28.718 [2024-11-20 14:12:25.959563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:51:28.718 [2024-11-20 14:12:25.959600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:51:28.718 [2024-11-20 14:12:25.959623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:51:28.718 [2024-11-20 14:12:25.959645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:51:28.718 [2024-11-20 14:12:25.959668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:51:28.718 [2024-11-20 14:12:25.959693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:51:28.718 [2024-11-20 14:12:25.959717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:51:28.718 [2024-11-20 14:12:25.959737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:51:28.718 [2024-11-20 14:12:25.959760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:51:28.718 [2024-11-20 14:12:25.959783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:51:28.718 [2024-11-20 14:12:25.959807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:51:28.718 [2024-11-20 14:12:25.959837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:51:28.719 [2024-11-20 14:12:25.959860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:51:28.719 [2024-11-20 14:12:25.959883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:51:28.719 [2024-11-20 14:12:25.959905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:51:28.719 [2024-11-20 14:12:25.959931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:28.719 [2024-11-20 14:12:25.959955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:28.719 [2024-11-20 14:12:25.959979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:51:28.719 [2024-11-20 14:12:25.960003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:51:28.719 [2024-11-20 14:12:25.960026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:51:28.719 [2024-11-20 14:12:25.960054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.719 [2024-11-20 14:12:25.960077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:51:28.719 [2024-11-20 14:12:25.960103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:51:28.719 [2024-11-20 14:12:25.960140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.719 [2024-11-20 14:12:26.007131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.719 [2024-11-20 14:12:26.007435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:28.719 [2024-11-20 14:12:26.007556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.893 ms 00:51:28.719 [2024-11-20 14:12:26.007631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.719 [2024-11-20 14:12:26.007830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.719 [2024-11-20 14:12:26.007948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:28.719 [2024-11-20 14:12:26.008037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:51:28.719 [2024-11-20 14:12:26.008154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.075201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.075522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:28.977 [2024-11-20 14:12:26.075656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.883 ms 00:51:28.977 [2024-11-20 14:12:26.075702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.075858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.075904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:28.977 [2024-11-20 14:12:26.076101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:51:28.977 [2024-11-20 14:12:26.076120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.076721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.076751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:28.977 [2024-11-20 14:12:26.076769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:51:28.977 [2024-11-20 14:12:26.076785] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.076950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.076970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:28.977 [2024-11-20 14:12:26.076986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:51:28.977 [2024-11-20 14:12:26.077002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.101367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.101432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:28.977 [2024-11-20 14:12:26.101454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:51:28.977 [2024-11-20 14:12:26.101469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.127413] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:51:28.977 [2024-11-20 14:12:26.127501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:51:28.977 [2024-11-20 14:12:26.127524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.127537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:51:28.977 [2024-11-20 14:12:26.127554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.831 ms 00:51:28.977 [2024-11-20 14:12:26.127567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.168623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.168940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:51:28.977 [2024-11-20 14:12:26.168996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.945 ms 00:51:28.977 [2024-11-20 14:12:26.169010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.193347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.193423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:51:28.977 [2024-11-20 14:12:26.193441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.246 ms 00:51:28.977 [2024-11-20 14:12:26.193454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.217180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.217257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:51:28.977 [2024-11-20 14:12:26.217276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.625 ms 00:51:28.977 [2024-11-20 14:12:26.217290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:28.977 [2024-11-20 14:12:26.218330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:28.977 [2024-11-20 14:12:26.218373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:28.977 [2024-11-20 14:12:26.218390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:51:28.977 [2024-11-20 14:12:26.218402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:51:29.235 [2024-11-20 14:12:26.325441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.325545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:51:29.235 [2024-11-20 14:12:26.325567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.005 ms 00:51:29.235 [2024-11-20 14:12:26.325582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.341992] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:29.235 [2024-11-20 14:12:26.345685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.345744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:29.235 [2024-11-20 14:12:26.345763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.016 ms 00:51:29.235 [2024-11-20 14:12:26.345777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.345939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.345956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:51:29.235 [2024-11-20 14:12:26.345970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:29.235 [2024-11-20 14:12:26.345983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.346075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.346090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:29.235 [2024-11-20 14:12:26.346103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:51:29.235 [2024-11-20 14:12:26.346115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.346143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.346162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:51:29.235 [2024-11-20 14:12:26.346174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:51:29.235 [2024-11-20 14:12:26.346187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.346223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:51:29.235 [2024-11-20 14:12:26.346237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.346250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:51:29.235 [2024-11-20 14:12:26.346262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:29.235 [2024-11-20 14:12:26.346273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.397838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 14:12:26.397941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:51:29.235 [2024-11-20 14:12:26.397963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.524 ms 00:51:29.235 [2024-11-20 14:12:26.397977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.398115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:29.235 [2024-11-20 
14:12:26.398132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:51:29.235 [2024-11-20 14:12:26.398146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:51:29.235 [2024-11-20 14:12:26.398159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:29.235 [2024-11-20 14:12:26.399562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 484.304 ms, result 0 00:51:30.169  [2024-11-20T14:12:28.427Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-20T14:12:29.814Z] Copying: 70/1024 [MB] (36 MBps) [2024-11-20T14:12:30.750Z] Copying: 101/1024 [MB] (30 MBps) [2024-11-20T14:12:31.687Z] Copying: 137/1024 [MB] (36 MBps) [2024-11-20T14:12:32.624Z] Copying: 171/1024 [MB] (33 MBps) [2024-11-20T14:12:33.559Z] Copying: 204/1024 [MB] (33 MBps) [2024-11-20T14:12:34.493Z] Copying: 239/1024 [MB] (35 MBps) [2024-11-20T14:12:35.428Z] Copying: 273/1024 [MB] (33 MBps) [2024-11-20T14:12:36.803Z] Copying: 309/1024 [MB] (35 MBps) [2024-11-20T14:12:37.737Z] Copying: 345/1024 [MB] (36 MBps) [2024-11-20T14:12:38.669Z] Copying: 381/1024 [MB] (36 MBps) [2024-11-20T14:12:39.601Z] Copying: 415/1024 [MB] (33 MBps) [2024-11-20T14:12:40.535Z] Copying: 448/1024 [MB] (32 MBps) [2024-11-20T14:12:41.468Z] Copying: 484/1024 [MB] (36 MBps) [2024-11-20T14:12:42.491Z] Copying: 521/1024 [MB] (36 MBps) [2024-11-20T14:12:43.428Z] Copying: 557/1024 [MB] (36 MBps) [2024-11-20T14:12:44.806Z] Copying: 593/1024 [MB] (35 MBps) [2024-11-20T14:12:45.742Z] Copying: 625/1024 [MB] (32 MBps) [2024-11-20T14:12:46.677Z] Copying: 662/1024 [MB] (37 MBps) [2024-11-20T14:12:47.609Z] Copying: 696/1024 [MB] (33 MBps) [2024-11-20T14:12:48.562Z] Copying: 715/1024 [MB] (18 MBps) [2024-11-20T14:12:49.498Z] Copying: 750/1024 [MB] (35 MBps) [2024-11-20T14:12:50.432Z] Copying: 785/1024 [MB] (35 MBps) [2024-11-20T14:12:51.807Z] Copying: 818/1024 [MB] (32 MBps) [2024-11-20T14:12:52.741Z] Copying: 853/1024 [MB] (35 MBps) [2024-11-20T14:12:53.675Z] Copying: 889/1024 [MB] (35 MBps) [2024-11-20T14:12:54.613Z] Copying: 923/1024 [MB] (34 MBps) [2024-11-20T14:12:55.563Z] Copying: 957/1024 [MB] (33 MBps) [2024-11-20T14:12:56.497Z] Copying: 988/1024 [MB] (31 MBps) [2024-11-20T14:12:57.433Z] Copying: 1019/1024 [MB] (31 MBps) [2024-11-20T14:12:57.999Z] Copying: 1048280/1048576 [kB] (4320 kBps) [2024-11-20T14:12:57.999Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 14:12:57.779864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.779971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:52:00.676 [2024-11-20 14:12:57.779995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:52:00.676 [2024-11-20 14:12:57.780009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.782383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:52:00.676 [2024-11-20 14:12:57.790801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.790882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:52:00.676 [2024-11-20 14:12:57.790902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.345 ms 00:52:00.676 [2024-11-20 14:12:57.790916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.803071] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.803144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:52:00.676 [2024-11-20 14:12:57.803163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.940 ms 00:52:00.676 [2024-11-20 14:12:57.803176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.828877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.829193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:52:00.676 [2024-11-20 14:12:57.829242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.670 ms 00:52:00.676 [2024-11-20 14:12:57.829267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.835686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.835993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:52:00.676 [2024-11-20 14:12:57.836037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.347 ms 00:52:00.676 [2024-11-20 14:12:57.836060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.884129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.884207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:52:00.676 [2024-11-20 14:12:57.884227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.952 ms 00:52:00.676 [2024-11-20 14:12:57.884240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.911314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.911687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:52:00.676 [2024-11-20 14:12:57.911736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.991 ms 00:52:00.676 [2024-11-20 14:12:57.911759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.676 [2024-11-20 14:12:57.986562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.676 [2024-11-20 14:12:57.986894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:52:00.676 [2024-11-20 14:12:57.986962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.695 ms 00:52:00.676 [2024-11-20 14:12:57.986984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.935 [2024-11-20 14:12:58.035585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.935 [2024-11-20 14:12:58.035674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:52:00.935 [2024-11-20 14:12:58.035694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.546 ms 00:52:00.935 [2024-11-20 14:12:58.035707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.935 [2024-11-20 14:12:58.083534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.935 [2024-11-20 14:12:58.083642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:52:00.935 [2024-11-20 14:12:58.083665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.739 ms 00:52:00.935 [2024-11-20 14:12:58.083678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:52:00.935 [2024-11-20 14:12:58.132477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.935 [2024-11-20 14:12:58.132605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:52:00.935 [2024-11-20 14:12:58.132635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.704 ms 00:52:00.935 [2024-11-20 14:12:58.132659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.935 [2024-11-20 14:12:58.180559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.935 [2024-11-20 14:12:58.180651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:52:00.935 [2024-11-20 14:12:58.180671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.692 ms 00:52:00.935 [2024-11-20 14:12:58.180688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.935 [2024-11-20 14:12:58.180780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:52:00.935 [2024-11-20 14:12:58.180803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:52:00.935 [2024-11-20 14:12:58.180819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:00.935 [2024-11-20 14:12:58.180964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.180977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.180990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
18: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181356] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181714] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.181995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.182008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.182020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 
14:12:58.182033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.182046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:52:00.936 [2024-11-20 14:12:58.182059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:52:00.937 [2024-11-20 14:12:58.182152] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:52:00.937 [2024-11-20 14:12:58.182164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bcf76a6-8843-4978-96b3-1dfc317e1622 00:52:00.937 [2024-11-20 14:12:58.182177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:52:00.937 [2024-11-20 14:12:58.182200] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:52:00.937 [2024-11-20 14:12:58.182227] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:52:00.937 [2024-11-20 14:12:58.182240] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:52:00.937 [2024-11-20 14:12:58.182252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:52:00.937 [2024-11-20 14:12:58.182264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:52:00.937 [2024-11-20 14:12:58.182276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:52:00.937 [2024-11-20 14:12:58.182287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:52:00.937 [2024-11-20 14:12:58.182298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:52:00.937 [2024-11-20 14:12:58.182315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.937 [2024-11-20 14:12:58.182335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:52:00.937 [2024-11-20 14:12:58.182355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:52:00.937 [2024-11-20 14:12:58.182368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.937 [2024-11-20 14:12:58.207640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.937 [2024-11-20 14:12:58.207720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:52:00.937 [2024-11-20 14:12:58.207748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.175 ms 00:52:00.937 [2024-11-20 14:12:58.207770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:00.937 [2024-11-20 14:12:58.208539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:00.937 [2024-11-20 14:12:58.208656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:52:00.937 
[2024-11-20 14:12:58.208678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:52:00.937 [2024-11-20 14:12:58.208703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.195 [2024-11-20 14:12:58.273239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.195 [2024-11-20 14:12:58.273320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:01.195 [2024-11-20 14:12:58.273339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.195 [2024-11-20 14:12:58.273352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.195 [2024-11-20 14:12:58.273452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.195 [2024-11-20 14:12:58.273468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:01.195 [2024-11-20 14:12:58.273499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.195 [2024-11-20 14:12:58.273517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.195 [2024-11-20 14:12:58.273620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.195 [2024-11-20 14:12:58.273636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:01.195 [2024-11-20 14:12:58.273650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.195 [2024-11-20 14:12:58.273663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.195 [2024-11-20 14:12:58.273684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.195 [2024-11-20 14:12:58.273712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:01.195 [2024-11-20 14:12:58.273724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.195 [2024-11-20 14:12:58.273735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.195 [2024-11-20 14:12:58.427292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.195 [2024-11-20 14:12:58.427396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:01.195 [2024-11-20 14:12:58.427421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.195 [2024-11-20 14:12:58.427439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.549850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.549944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:01.468 [2024-11-20 14:12:58.549969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.549988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:01.468 [2024-11-20 14:12:58.550179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550287] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:01.468 [2024-11-20 14:12:58.550306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:01.468 [2024-11-20 14:12:58.550580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:52:01.468 [2024-11-20 14:12:58.550680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:01.468 [2024-11-20 14:12:58.550767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.550826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:01.468 [2024-11-20 14:12:58.550840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:01.468 [2024-11-20 14:12:58.550853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:01.468 [2024-11-20 14:12:58.550865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.468 [2024-11-20 14:12:58.551041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 772.082 ms, result 0 00:52:03.548 00:52:03.548 00:52:03.548 14:13:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:52:06.075 14:13:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:52:06.075 [2024-11-20 14:13:03.004340] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
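The two dirty_shutdown.sh steps above show the test's verification pattern: an md5 digest of the written data is recorded while it is resident, the FTL device is torn down and brought back up, and spdk_dd then reads the data back out of ftl0 so the digest can be re-checked. A minimal sketch of that read-back step, assuming the repo layout shown in the log (paths, flags, and block counts are taken verbatim from the spdk_dd command above):

  # Read the first 262144 blocks back from the restarted FTL bdev into a
  # scratch file; --json points spdk_dd at the bdev config saved earlier.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" --count=262144 \
      --json="$SPDK/test/ftl/config/ftl.json"
  # Compare against the digest recorded before shutdown
  # (this is the md5sum -c step that appears later in the log).
  md5sum -c "$SPDK/test/ftl/testfile.md5"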
00:52:06.075 [2024-11-20 14:13:03.004564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82877 ] 00:52:06.075 [2024-11-20 14:13:03.211202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:06.333 [2024-11-20 14:13:03.405774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:06.591 [2024-11-20 14:13:03.893118] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:06.591 [2024-11-20 14:13:03.893203] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:06.850 [2024-11-20 14:13:04.062788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.850 [2024-11-20 14:13:04.062878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:52:06.850 [2024-11-20 14:13:04.062907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:52:06.850 [2024-11-20 14:13:04.062925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.850 [2024-11-20 14:13:04.063025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.850 [2024-11-20 14:13:04.063046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:06.850 [2024-11-20 14:13:04.063072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:52:06.850 [2024-11-20 14:13:04.063093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.850 [2024-11-20 14:13:04.063133] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:52:06.850 [2024-11-20 14:13:04.064583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:52:06.850 [2024-11-20 14:13:04.064645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.850 [2024-11-20 14:13:04.064661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:06.850 [2024-11-20 14:13:04.064676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:52:06.850 [2024-11-20 14:13:04.064688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.850 [2024-11-20 14:13:04.066553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:52:06.850 [2024-11-20 14:13:04.090575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.850 [2024-11-20 14:13:04.090665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:52:06.850 [2024-11-20 14:13:04.090686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.022 ms 00:52:06.850 [2024-11-20 14:13:04.090700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.850 [2024-11-20 14:13:04.090831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.850 [2024-11-20 14:13:04.090848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:52:06.850 [2024-11-20 14:13:04.090862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:52:06.850 [2024-11-20 14:13:04.090875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.850 [2024-11-20 14:13:04.098878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:52:06.851 [2024-11-20 14:13:04.098941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:06.851 [2024-11-20 14:13:04.098958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.870 ms 00:52:06.851 [2024-11-20 14:13:04.098977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.099085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.099104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:06.851 [2024-11-20 14:13:04.099118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:52:06.851 [2024-11-20 14:13:04.099130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.099191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.099216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:52:06.851 [2024-11-20 14:13:04.099229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:52:06.851 [2024-11-20 14:13:04.099241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.099278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:52:06.851 [2024-11-20 14:13:04.105243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.105309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:06.851 [2024-11-20 14:13:04.105326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.975 ms 00:52:06.851 [2024-11-20 14:13:04.105344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.105396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.105410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:52:06.851 [2024-11-20 14:13:04.105423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:52:06.851 [2024-11-20 14:13:04.105436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.105542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:52:06.851 [2024-11-20 14:13:04.105577] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:52:06.851 [2024-11-20 14:13:04.105621] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:52:06.851 [2024-11-20 14:13:04.105647] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:52:06.851 [2024-11-20 14:13:04.105759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:52:06.851 [2024-11-20 14:13:04.105776] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:52:06.851 [2024-11-20 14:13:04.105792] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:52:06.851 [2024-11-20 14:13:04.105808] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:52:06.851 [2024-11-20 14:13:04.105823] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:52:06.851 [2024-11-20 14:13:04.105837] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:52:06.851 [2024-11-20 14:13:04.105849] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:52:06.851 [2024-11-20 14:13:04.105861] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:52:06.851 [2024-11-20 14:13:04.105877] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:52:06.851 [2024-11-20 14:13:04.105890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.105903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:52:06.851 [2024-11-20 14:13:04.105916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:52:06.851 [2024-11-20 14:13:04.105928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.106021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.851 [2024-11-20 14:13:04.106035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:52:06.851 [2024-11-20 14:13:04.106048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:52:06.851 [2024-11-20 14:13:04.106060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.851 [2024-11-20 14:13:04.106182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:52:06.851 [2024-11-20 14:13:04.106201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:52:06.851 [2024-11-20 14:13:04.106213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:52:06.851 [2024-11-20 14:13:04.106251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:52:06.851 [2024-11-20 14:13:04.106287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:06.851 [2024-11-20 14:13:04.106310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:52:06.851 [2024-11-20 14:13:04.106321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:52:06.851 [2024-11-20 14:13:04.106333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:06.851 [2024-11-20 14:13:04.106344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:52:06.851 [2024-11-20 14:13:04.106355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:52:06.851 [2024-11-20 14:13:04.106378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:52:06.851 [2024-11-20 14:13:04.106401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106412] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:52:06.851 [2024-11-20 14:13:04.106436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:52:06.851 [2024-11-20 14:13:04.106470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:52:06.851 [2024-11-20 14:13:04.106520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:52:06.851 [2024-11-20 14:13:04.106555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:52:06.851 [2024-11-20 14:13:04.106591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:06.851 [2024-11-20 14:13:04.106613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:52:06.851 [2024-11-20 14:13:04.106625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:52:06.851 [2024-11-20 14:13:04.106636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:06.851 [2024-11-20 14:13:04.106647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:52:06.851 [2024-11-20 14:13:04.106658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:52:06.851 [2024-11-20 14:13:04.106670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:52:06.851 [2024-11-20 14:13:04.106692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:52:06.851 [2024-11-20 14:13:04.106704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106716] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:52:06.851 [2024-11-20 14:13:04.106728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:52:06.851 [2024-11-20 14:13:04.106740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:06.851 [2024-11-20 14:13:04.106764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:52:06.851 [2024-11-20 14:13:04.106776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:52:06.851 [2024-11-20 14:13:04.106787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:52:06.851 
[2024-11-20 14:13:04.106799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:52:06.851 [2024-11-20 14:13:04.106810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:52:06.851 [2024-11-20 14:13:04.106821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:52:06.851 [2024-11-20 14:13:04.106834] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:52:06.851 [2024-11-20 14:13:04.106849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:06.851 [2024-11-20 14:13:04.106863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:52:06.851 [2024-11-20 14:13:04.106876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:52:06.851 [2024-11-20 14:13:04.106890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:52:06.851 [2024-11-20 14:13:04.106902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:52:06.851 [2024-11-20 14:13:04.106916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:52:06.851 [2024-11-20 14:13:04.106929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:52:06.852 [2024-11-20 14:13:04.106942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:52:06.852 [2024-11-20 14:13:04.106955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:52:06.852 [2024-11-20 14:13:04.106968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:52:06.852 [2024-11-20 14:13:04.106982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.106995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.107007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.107019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.107032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:52:06.852 [2024-11-20 14:13:04.107045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:52:06.852 [2024-11-20 14:13:04.107064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.107078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:52:06.852 [2024-11-20 14:13:04.107090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:52:06.852 [2024-11-20 14:13:04.107103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:52:06.852 [2024-11-20 14:13:04.107116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:52:06.852 [2024-11-20 14:13:04.107130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.852 [2024-11-20 14:13:04.107142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:52:06.852 [2024-11-20 14:13:04.107155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:52:06.852 [2024-11-20 14:13:04.107166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.852 [2024-11-20 14:13:04.150719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.852 [2024-11-20 14:13:04.151019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:06.852 [2024-11-20 14:13:04.151051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.492 ms 00:52:06.852 [2024-11-20 14:13:04.151065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:06.852 [2024-11-20 14:13:04.151193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:06.852 [2024-11-20 14:13:04.151207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:52:06.852 [2024-11-20 14:13:04.151220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:52:06.852 [2024-11-20 14:13:04.151232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.110 [2024-11-20 14:13:04.217964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.110 [2024-11-20 14:13:04.218031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:07.110 [2024-11-20 14:13:04.218051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.636 ms 00:52:07.110 [2024-11-20 14:13:04.218064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.110 [2024-11-20 14:13:04.218135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.110 [2024-11-20 14:13:04.218149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:07.110 [2024-11-20 14:13:04.218168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:52:07.110 [2024-11-20 14:13:04.218180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.110 [2024-11-20 14:13:04.218767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.110 [2024-11-20 14:13:04.218787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:07.110 [2024-11-20 14:13:04.218800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:52:07.111 [2024-11-20 14:13:04.218812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.218967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.218992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:07.111 [2024-11-20 14:13:04.219006] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:52:07.111 [2024-11-20 14:13:04.219026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.242129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.242198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:07.111 [2024-11-20 14:13:04.242222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:52:07.111 [2024-11-20 14:13:04.242235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.266435] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:52:07.111 [2024-11-20 14:13:04.266527] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:52:07.111 [2024-11-20 14:13:04.266561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.266576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:52:07.111 [2024-11-20 14:13:04.266601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.160 ms 00:52:07.111 [2024-11-20 14:13:04.266613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.304506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.304601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:52:07.111 [2024-11-20 14:13:04.304622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.806 ms 00:52:07.111 [2024-11-20 14:13:04.304635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.328472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.328575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:52:07.111 [2024-11-20 14:13:04.328595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.743 ms 00:52:07.111 [2024-11-20 14:13:04.328608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.351888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.351985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:52:07.111 [2024-11-20 14:13:04.352004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.205 ms 00:52:07.111 [2024-11-20 14:13:04.352017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.111 [2024-11-20 14:13:04.353057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.111 [2024-11-20 14:13:04.353093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:52:07.111 [2024-11-20 14:13:04.353108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:52:07.111 [2024-11-20 14:13:04.353126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.458568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.458671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:52:07.369 [2024-11-20 14:13:04.458707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 105.401 ms 00:52:07.369 [2024-11-20 14:13:04.458720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.474880] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:52:07.369 [2024-11-20 14:13:04.478599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.478655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:52:07.369 [2024-11-20 14:13:04.478672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.784 ms 00:52:07.369 [2024-11-20 14:13:04.478686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.478847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.478866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:52:07.369 [2024-11-20 14:13:04.478880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:52:07.369 [2024-11-20 14:13:04.478898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.480831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.480880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:52:07.369 [2024-11-20 14:13:04.480896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:52:07.369 [2024-11-20 14:13:04.480908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.480955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.480969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:52:07.369 [2024-11-20 14:13:04.480982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:52:07.369 [2024-11-20 14:13:04.480993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.481041] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:52:07.369 [2024-11-20 14:13:04.481056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.481068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:52:07.369 [2024-11-20 14:13:04.481081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:52:07.369 [2024-11-20 14:13:04.481093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.527930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.527998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:52:07.369 [2024-11-20 14:13:04.528018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.809 ms 00:52:07.369 [2024-11-20 14:13:04.528039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:07.369 [2024-11-20 14:13:04.528156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:07.369 [2024-11-20 14:13:04.528172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:52:07.369 [2024-11-20 14:13:04.528186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:52:07.369 [2024-11-20 14:13:04.528198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
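Two figures in the dumps above can be cross-checked directly from the logged parameters. The l2p region stores the logical-to-physical map, so its size is the entry count times the address size: 20971520 entries x 4 B = 80 MiB, exactly the "blocks: 80.00 MiB" reported for Region l2p. Likewise, the write-amplification factor in the earlier shutdown dump is total writes divided by user writes. A quick sketch of both checks (bc is assumed to be available):

  # l2p region size in MiB: 20971520 entries * 4-byte addresses
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80
  # WAF from the earlier stats dump: total writes / user writes
  echo "scale=4; 130240 / 129280" | bc     # -> 1.0074, matching "WAF: 1.0074"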
00:52:07.369 [2024-11-20 14:13:04.532034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 467.610 ms, result 0 00:52:08.744  [2024-11-20T14:13:07.004Z] Copying: 924/1048576 [kB] (924 kBps) [2024-11-20T14:13:07.939Z] Copying: 4340/1048576 [kB] (3416 kBps) [2024-11-20T14:13:08.873Z] Copying: 27/1024 [MB] (23 MBps) [2024-11-20T14:13:09.808Z] Copying: 62/1024 [MB] (35 MBps) [2024-11-20T14:13:11.204Z] Copying: 101/1024 [MB] (38 MBps) [2024-11-20T14:13:12.175Z] Copying: 140/1024 [MB] (39 MBps) [2024-11-20T14:13:13.110Z] Copying: 178/1024 [MB] (37 MBps) [2024-11-20T14:13:14.046Z] Copying: 215/1024 [MB] (37 MBps) [2024-11-20T14:13:14.977Z] Copying: 254/1024 [MB] (39 MBps) [2024-11-20T14:13:15.917Z] Copying: 294/1024 [MB] (39 MBps) [2024-11-20T14:13:16.863Z] Copying: 335/1024 [MB] (40 MBps) [2024-11-20T14:13:18.239Z] Copying: 375/1024 [MB] (39 MBps) [2024-11-20T14:13:19.173Z] Copying: 415/1024 [MB] (40 MBps) [2024-11-20T14:13:20.108Z] Copying: 450/1024 [MB] (35 MBps) [2024-11-20T14:13:21.046Z] Copying: 485/1024 [MB] (34 MBps) [2024-11-20T14:13:21.980Z] Copying: 520/1024 [MB] (35 MBps) [2024-11-20T14:13:22.913Z] Copying: 553/1024 [MB] (32 MBps) [2024-11-20T14:13:23.873Z] Copying: 591/1024 [MB] (38 MBps) [2024-11-20T14:13:25.249Z] Copying: 630/1024 [MB] (39 MBps) [2024-11-20T14:13:25.816Z] Copying: 668/1024 [MB] (37 MBps) [2024-11-20T14:13:27.191Z] Copying: 707/1024 [MB] (39 MBps) [2024-11-20T14:13:28.127Z] Copying: 747/1024 [MB] (39 MBps) [2024-11-20T14:13:29.063Z] Copying: 785/1024 [MB] (38 MBps) [2024-11-20T14:13:30.001Z] Copying: 821/1024 [MB] (36 MBps) [2024-11-20T14:13:31.008Z] Copying: 859/1024 [MB] (37 MBps) [2024-11-20T14:13:31.944Z] Copying: 899/1024 [MB] (40 MBps) [2024-11-20T14:13:32.881Z] Copying: 942/1024 [MB] (42 MBps) [2024-11-20T14:13:33.838Z] Copying: 984/1024 [MB] (41 MBps) [2024-11-20T14:13:34.776Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-11-20 14:13:34.532193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.532313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:52:37.453 [2024-11-20 14:13:34.532342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:52:37.453 [2024-11-20 14:13:34.532362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.532406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:52:37.453 [2024-11-20 14:13:34.539318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.539394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:52:37.453 [2024-11-20 14:13:34.539421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:52:37.453 [2024-11-20 14:13:34.539441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.539844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.539881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:52:37.453 [2024-11-20 14:13:34.539915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:52:37.453 [2024-11-20 14:13:34.539939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.552759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 
[2024-11-20 14:13:34.552844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:52:37.453 [2024-11-20 14:13:34.552872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms 00:52:37.453 [2024-11-20 14:13:34.552893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.562935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.563032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:52:37.453 [2024-11-20 14:13:34.563089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.980 ms 00:52:37.453 [2024-11-20 14:13:34.563122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.629870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.629960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:52:37.453 [2024-11-20 14:13:34.629988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.584 ms 00:52:37.453 [2024-11-20 14:13:34.630008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.662464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.662781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:52:37.453 [2024-11-20 14:13:34.662824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.362 ms 00:52:37.453 [2024-11-20 14:13:34.662845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.665007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.665092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:52:37.453 [2024-11-20 14:13:34.665133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:52:37.453 [2024-11-20 14:13:34.665167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.453 [2024-11-20 14:13:34.724416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.453 [2024-11-20 14:13:34.724540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:52:37.453 [2024-11-20 14:13:34.724569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.178 ms 00:52:37.453 [2024-11-20 14:13:34.724590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.713 [2024-11-20 14:13:34.782353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.713 [2024-11-20 14:13:34.782660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:52:37.713 [2024-11-20 14:13:34.782722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.685 ms 00:52:37.713 [2024-11-20 14:13:34.782742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.713 [2024-11-20 14:13:34.840198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.713 [2024-11-20 14:13:34.840281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:52:37.713 [2024-11-20 14:13:34.840308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.381 ms 00:52:37.713 [2024-11-20 14:13:34.840328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.713 [2024-11-20 14:13:34.899222] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.713 [2024-11-20 14:13:34.899309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:52:37.713 [2024-11-20 14:13:34.899340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.741 ms 00:52:37.713 [2024-11-20 14:13:34.899363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.713 [2024-11-20 14:13:34.899445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:52:37.713 [2024-11-20 14:13:34.899503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:37.713 [2024-11-20 14:13:34.899549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:52:37.713 [2024-11-20 14:13:34.899579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:37.713 [2024-11-20 14:13:34.899731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.899980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 
14:13:34.900054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:52:37.714 [2024-11-20 14:13:34.900607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.900985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:52:37.714 [2024-11-20 14:13:34.901737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:52:37.715 [2024-11-20 14:13:34.901773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:52:37.715 [2024-11-20 14:13:34.901806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:52:37.715 [2024-11-20 14:13:34.901850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:52:37.715 [2024-11-20 14:13:34.901886] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bcf76a6-8843-4978-96b3-1dfc317e1622 00:52:37.715 [2024-11-20 14:13:34.901917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:52:37.715 [2024-11-20 14:13:34.901962] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360 00:52:37.715 [2024-11-20 14:13:34.901991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133376 00:52:37.715 [2024-11-20 14:13:34.902031] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:52:37.715 [2024-11-20 14:13:34.902066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:52:37.715 [2024-11-20 14:13:34.902096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:52:37.715 [2024-11-20 14:13:34.902125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:52:37.715 [2024-11-20 14:13:34.902175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:52:37.715 [2024-11-20 14:13:34.902204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:52:37.715 [2024-11-20 14:13:34.902235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.715 [2024-11-20 14:13:34.902265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:52:37.715 [2024-11-20 14:13:34.902296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:52:37.715 [2024-11-20 14:13:34.902327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.926633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.715 [2024-11-20 14:13:34.926702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:52:37.715 [2024-11-20 14:13:34.926719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.200 ms 00:52:37.715 [2024-11-20 14:13:34.926732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.927272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:37.715 [2024-11-20 14:13:34.927294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:52:37.715 [2024-11-20 14:13:34.927310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:52:37.715 [2024-11-20 14:13:34.927325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.981872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.715 [2024-11-20 14:13:34.981933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:37.715 [2024-11-20 14:13:34.981952] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.715 [2024-11-20 14:13:34.981965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.982039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.715 [2024-11-20 14:13:34.982053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:37.715 [2024-11-20 14:13:34.982066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.715 [2024-11-20 14:13:34.982078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.982200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.715 [2024-11-20 14:13:34.982216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:37.715 [2024-11-20 14:13:34.982229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.715 [2024-11-20 14:13:34.982241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.715 [2024-11-20 14:13:34.982262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.715 [2024-11-20 14:13:34.982275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:37.715 [2024-11-20 14:13:34.982287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.715 [2024-11-20 14:13:34.982299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.113972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.114036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:37.974 [2024-11-20 14:13:35.114055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.114067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.220466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.220549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:37.974 [2024-11-20 14:13:35.220567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.220580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.220687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.220706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:37.974 [2024-11-20 14:13:35.220720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.220732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.220782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.220797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:37.974 [2024-11-20 14:13:35.220809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.220820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.220941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.220957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:52:37.974 [2024-11-20 14:13:35.220975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.220986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.221030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.221045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:52:37.974 [2024-11-20 14:13:35.221058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.221071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.221113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.221126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:37.974 [2024-11-20 14:13:35.221138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.221155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.221203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:37.974 [2024-11-20 14:13:35.221216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:37.974 [2024-11-20 14:13:35.221229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:37.974 [2024-11-20 14:13:35.221241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:37.974 [2024-11-20 14:13:35.221373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 689.150 ms, result 0 00:52:39.348 00:52:39.348 00:52:39.349 14:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:52:41.252 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:52:41.252 14:13:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:52:41.252 [2024-11-20 14:13:38.451245] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
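A note on the shutdown stats dumped above: the WAF figure appears to be simply total device writes divided by user writes, and the logged counters bear that out: 135360 / 133376 ≈ 1.0149, exactly the value printed. The second FTL shutdown near the end of this test reports WAF: inf for the degenerate case, since its stats show user writes: 0 and the ratio is undefined.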
00:52:41.252 [2024-11-20 14:13:38.451446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83221 ] 00:52:41.512 [2024-11-20 14:13:38.656299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:41.512 [2024-11-20 14:13:38.828351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:42.084 [2024-11-20 14:13:39.262548] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:42.084 [2024-11-20 14:13:39.262631] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:42.346 [2024-11-20 14:13:39.426919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.427171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:52:42.346 [2024-11-20 14:13:39.427208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:52:42.346 [2024-11-20 14:13:39.427222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.427298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.427314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:42.346 [2024-11-20 14:13:39.427333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:52:42.346 [2024-11-20 14:13:39.427345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.427373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:52:42.346 [2024-11-20 14:13:39.428805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:52:42.346 [2024-11-20 14:13:39.428856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.428870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:42.346 [2024-11-20 14:13:39.428882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:52:42.346 [2024-11-20 14:13:39.428894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.430525] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:52:42.346 [2024-11-20 14:13:39.452992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.453217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:52:42.346 [2024-11-20 14:13:39.453258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.466 ms 00:52:42.346 [2024-11-20 14:13:39.453271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.453360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.453374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:52:42.346 [2024-11-20 14:13:39.453387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:52:42.346 [2024-11-20 14:13:39.453399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.461850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:52:42.346 [2024-11-20 14:13:39.461922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:42.346 [2024-11-20 14:13:39.461949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.327 ms 00:52:42.346 [2024-11-20 14:13:39.461980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.462140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.462168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:42.346 [2024-11-20 14:13:39.462191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:52:42.346 [2024-11-20 14:13:39.462212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.462294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.462312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:52:42.346 [2024-11-20 14:13:39.462329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:52:42.346 [2024-11-20 14:13:39.462344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.462392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:52:42.346 [2024-11-20 14:13:39.469528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.469576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:42.346 [2024-11-20 14:13:39.469595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.153 ms 00:52:42.346 [2024-11-20 14:13:39.469617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.469663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.469678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:52:42.346 [2024-11-20 14:13:39.469693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:52:42.346 [2024-11-20 14:13:39.469712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.469802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:52:42.346 [2024-11-20 14:13:39.469843] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:52:42.346 [2024-11-20 14:13:39.469901] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:52:42.346 [2024-11-20 14:13:39.469937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:52:42.346 [2024-11-20 14:13:39.470064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:52:42.346 [2024-11-20 14:13:39.470099] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:52:42.346 [2024-11-20 14:13:39.470118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:52:42.346 [2024-11-20 14:13:39.470137] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:52:42.346 [2024-11-20 14:13:39.470155] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:52:42.346 [2024-11-20 14:13:39.470172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:52:42.346 [2024-11-20 14:13:39.470187] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:52:42.346 [2024-11-20 14:13:39.470202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:52:42.346 [2024-11-20 14:13:39.470221] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:52:42.346 [2024-11-20 14:13:39.470237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.470253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:52:42.346 [2024-11-20 14:13:39.470269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:52:42.346 [2024-11-20 14:13:39.470284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.470383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.346 [2024-11-20 14:13:39.470399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:52:42.346 [2024-11-20 14:13:39.470415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:52:42.346 [2024-11-20 14:13:39.470430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.346 [2024-11-20 14:13:39.470579] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:52:42.346 [2024-11-20 14:13:39.470608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:52:42.346 [2024-11-20 14:13:39.470629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:42.346 [2024-11-20 14:13:39.470650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.346 [2024-11-20 14:13:39.470671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:52:42.346 [2024-11-20 14:13:39.470691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:52:42.346 [2024-11-20 14:13:39.470710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:52:42.346 [2024-11-20 14:13:39.470729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:52:42.346 [2024-11-20 14:13:39.470749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:52:42.346 [2024-11-20 14:13:39.470768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:42.346 [2024-11-20 14:13:39.470787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:52:42.346 [2024-11-20 14:13:39.470807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:52:42.346 [2024-11-20 14:13:39.470826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:42.346 [2024-11-20 14:13:39.470844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:52:42.346 [2024-11-20 14:13:39.470864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:52:42.346 [2024-11-20 14:13:39.470897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.346 [2024-11-20 14:13:39.470916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:52:42.346 [2024-11-20 14:13:39.470936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:52:42.347 [2024-11-20 14:13:39.470956] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.347 [2024-11-20 14:13:39.470976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:52:42.347 [2024-11-20 14:13:39.470996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:52:42.347 [2024-11-20 14:13:39.471061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:52:42.347 [2024-11-20 14:13:39.471115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:52:42.347 [2024-11-20 14:13:39.471169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:52:42.347 [2024-11-20 14:13:39.471222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:42.347 [2024-11-20 14:13:39.471258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:52:42.347 [2024-11-20 14:13:39.471276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:52:42.347 [2024-11-20 14:13:39.471294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:42.347 [2024-11-20 14:13:39.471311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:52:42.347 [2024-11-20 14:13:39.471325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:52:42.347 [2024-11-20 14:13:39.471338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:52:42.347 [2024-11-20 14:13:39.471364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:52:42.347 [2024-11-20 14:13:39.471378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:52:42.347 [2024-11-20 14:13:39.471405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:52:42.347 [2024-11-20 14:13:39.471419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:42.347 [2024-11-20 14:13:39.471448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:52:42.347 [2024-11-20 14:13:39.471462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:52:42.347 [2024-11-20 14:13:39.471475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:52:42.347 
[2024-11-20 14:13:39.471502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:52:42.347 [2024-11-20 14:13:39.471516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:52:42.347 [2024-11-20 14:13:39.471539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:52:42.347 [2024-11-20 14:13:39.471575] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:52:42.347 [2024-11-20 14:13:39.471593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:52:42.347 [2024-11-20 14:13:39.471651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:52:42.347 [2024-11-20 14:13:39.471670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:52:42.347 [2024-11-20 14:13:39.471686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:52:42.347 [2024-11-20 14:13:39.471706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:52:42.347 [2024-11-20 14:13:39.471729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:52:42.347 [2024-11-20 14:13:39.471752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:52:42.347 [2024-11-20 14:13:39.471773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:52:42.347 [2024-11-20 14:13:39.471793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:52:42.347 [2024-11-20 14:13:39.471816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:52:42.347 [2024-11-20 14:13:39.471924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:52:42.347 [2024-11-20 14:13:39.471954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:52:42.347 [2024-11-20 14:13:39.471999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:52:42.347 [2024-11-20 14:13:39.472021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:52:42.347 [2024-11-20 14:13:39.472042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:52:42.347 [2024-11-20 14:13:39.472060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.472076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:52:42.347 [2024-11-20 14:13:39.472093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:52:42.347 [2024-11-20 14:13:39.472108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.514847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.514917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:42.347 [2024-11-20 14:13:39.514934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.663 ms 00:52:42.347 [2024-11-20 14:13:39.514945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.515074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.515092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:52:42.347 [2024-11-20 14:13:39.515104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:52:42.347 [2024-11-20 14:13:39.515115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.572803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.572857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:42.347 [2024-11-20 14:13:39.572874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.601 ms 00:52:42.347 [2024-11-20 14:13:39.572885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.572944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.572956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:42.347 [2024-11-20 14:13:39.572972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:52:42.347 [2024-11-20 14:13:39.572982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.573515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.573532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:42.347 [2024-11-20 14:13:39.573543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:52:42.347 [2024-11-20 14:13:39.573554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.347 [2024-11-20 14:13:39.573699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.347 [2024-11-20 14:13:39.573713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:42.348 [2024-11-20 14:13:39.573724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:52:42.348 [2024-11-20 14:13:39.573741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.348 [2024-11-20 14:13:39.593860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.348 [2024-11-20 14:13:39.593907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:42.348 [2024-11-20 14:13:39.593926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.097 ms 00:52:42.348 [2024-11-20 14:13:39.593937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.348 [2024-11-20 14:13:39.614595] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:52:42.348 [2024-11-20 14:13:39.614769] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:52:42.348 [2024-11-20 14:13:39.614792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.348 [2024-11-20 14:13:39.614804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:52:42.348 [2024-11-20 14:13:39.614817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:52:42.348 [2024-11-20 14:13:39.614828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.348 [2024-11-20 14:13:39.645789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.348 [2024-11-20 14:13:39.645839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:52:42.348 [2024-11-20 14:13:39.645872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.912 ms 00:52:42.348 [2024-11-20 14:13:39.645897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.348 [2024-11-20 14:13:39.664855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.348 [2024-11-20 14:13:39.665058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:52:42.348 [2024-11-20 14:13:39.665094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.902 ms 00:52:42.348 [2024-11-20 14:13:39.665114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.607 [2024-11-20 14:13:39.685594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.607 [2024-11-20 14:13:39.685647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:52:42.607 [2024-11-20 14:13:39.685663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.421 ms 00:52:42.607 [2024-11-20 14:13:39.685673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.607 [2024-11-20 14:13:39.686653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.607 [2024-11-20 14:13:39.686688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:52:42.607 [2024-11-20 14:13:39.686703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:52:42.607 [2024-11-20 14:13:39.686719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.607 [2024-11-20 14:13:39.780050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.607 [2024-11-20 14:13:39.780132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:52:42.608 [2024-11-20 14:13:39.780163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.294 ms 00:52:42.608 [2024-11-20 14:13:39.780174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.793884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:52:42.608 [2024-11-20 14:13:39.797342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.797376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:52:42.608 [2024-11-20 14:13:39.797393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:52:42.608 [2024-11-20 14:13:39.797404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.797534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.797549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:52:42.608 [2024-11-20 14:13:39.797560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:52:42.608 [2024-11-20 14:13:39.797575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.798491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.798533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:52:42.608 [2024-11-20 14:13:39.798546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:52:42.608 [2024-11-20 14:13:39.798557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.798589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.798601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:52:42.608 [2024-11-20 14:13:39.798613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:52:42.608 [2024-11-20 14:13:39.798624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.798666] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:52:42.608 [2024-11-20 14:13:39.798680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.798691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:52:42.608 [2024-11-20 14:13:39.798702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:52:42.608 [2024-11-20 14:13:39.798713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.838544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.838739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:52:42.608 [2024-11-20 14:13:39.838766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.808 ms 00:52:42.608 [2024-11-20 14:13:39.838787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:42.608 [2024-11-20 14:13:39.838880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:42.608 [2024-11-20 14:13:39.838894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:52:42.608 [2024-11-20 14:13:39.838906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:52:42.608 [2024-11-20 14:13:39.838916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
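A quick consistency check on the layout dump above: Region l2p is reported at 80.00 MiB, and that is exactly what the logged parameters imply, since 20971520 L2P entries at an address size of 4 bytes gives 20971520 x 4 B = 80 MiB. The l2p cache notice just above ("l2p maximum resident size is: 9 (of 10) MiB") suggests only a small window of that table is kept resident at any one time, with the full mapping living on the device.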
00:52:42.608 [2024-11-20 14:13:39.840252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.784 ms, result 0 00:52:43.986  [2024-11-20T14:13:42.243Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-20T14:13:43.178Z] Copying: 69/1024 [MB] (35 MBps) [2024-11-20T14:13:44.113Z] Copying: 101/1024 [MB] (32 MBps) [2024-11-20T14:13:45.490Z] Copying: 134/1024 [MB] (32 MBps) [2024-11-20T14:13:46.425Z] Copying: 168/1024 [MB] (34 MBps) [2024-11-20T14:13:47.359Z] Copying: 204/1024 [MB] (35 MBps) [2024-11-20T14:13:48.301Z] Copying: 237/1024 [MB] (33 MBps) [2024-11-20T14:13:49.235Z] Copying: 269/1024 [MB] (32 MBps) [2024-11-20T14:13:50.169Z] Copying: 301/1024 [MB] (32 MBps) [2024-11-20T14:13:51.104Z] Copying: 333/1024 [MB] (32 MBps) [2024-11-20T14:13:52.527Z] Copying: 367/1024 [MB] (34 MBps) [2024-11-20T14:13:53.096Z] Copying: 401/1024 [MB] (33 MBps) [2024-11-20T14:13:54.472Z] Copying: 433/1024 [MB] (32 MBps) [2024-11-20T14:13:55.480Z] Copying: 465/1024 [MB] (31 MBps) [2024-11-20T14:13:56.415Z] Copying: 499/1024 [MB] (34 MBps) [2024-11-20T14:13:57.351Z] Copying: 535/1024 [MB] (35 MBps) [2024-11-20T14:13:58.287Z] Copying: 570/1024 [MB] (35 MBps) [2024-11-20T14:13:59.224Z] Copying: 606/1024 [MB] (36 MBps) [2024-11-20T14:14:00.160Z] Copying: 637/1024 [MB] (30 MBps) [2024-11-20T14:14:01.097Z] Copying: 671/1024 [MB] (34 MBps) [2024-11-20T14:14:02.475Z] Copying: 707/1024 [MB] (35 MBps) [2024-11-20T14:14:03.411Z] Copying: 741/1024 [MB] (33 MBps) [2024-11-20T14:14:04.364Z] Copying: 775/1024 [MB] (34 MBps) [2024-11-20T14:14:05.301Z] Copying: 808/1024 [MB] (32 MBps) [2024-11-20T14:14:06.239Z] Copying: 841/1024 [MB] (32 MBps) [2024-11-20T14:14:07.198Z] Copying: 874/1024 [MB] (33 MBps) [2024-11-20T14:14:08.134Z] Copying: 911/1024 [MB] (36 MBps) [2024-11-20T14:14:09.512Z] Copying: 944/1024 [MB] (32 MBps) [2024-11-20T14:14:10.445Z] Copying: 980/1024 [MB] (36 MBps) [2024-11-20T14:14:10.445Z] Copying: 1011/1024 [MB] (31 MBps) [2024-11-20T14:14:10.709Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 14:14:10.578664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.578973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:53:13.386 [2024-11-20 14:14:10.579118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:53:13.386 [2024-11-20 14:14:10.579244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.579340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:53:13.386 [2024-11-20 14:14:10.588364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.588518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:53:13.386 [2024-11-20 14:14:10.588596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.716 ms 00:53:13.386 [2024-11-20 14:14:10.588648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.589032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.589235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:53:13.386 [2024-11-20 14:14:10.589363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:53:13.386 [2024-11-20 14:14:10.589419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:53:13.386 [2024-11-20 14:14:10.594275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.594554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:53:13.386 [2024-11-20 14:14:10.594726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.717 ms 00:53:13.386 [2024-11-20 14:14:10.594796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.603347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.603399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:53:13.386 [2024-11-20 14:14:10.603419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.378 ms 00:53:13.386 [2024-11-20 14:14:10.603436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.662806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.662917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:53:13.386 [2024-11-20 14:14:10.662958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.238 ms 00:53:13.386 [2024-11-20 14:14:10.662987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.695741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.695834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:53:13.386 [2024-11-20 14:14:10.695859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.639 ms 00:53:13.386 [2024-11-20 14:14:10.695876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.386 [2024-11-20 14:14:10.697826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.386 [2024-11-20 14:14:10.698062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:53:13.386 [2024-11-20 14:14:10.698093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:53:13.386 [2024-11-20 14:14:10.698111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.646 [2024-11-20 14:14:10.758425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.646 [2024-11-20 14:14:10.758517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:53:13.646 [2024-11-20 14:14:10.758542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.275 ms 00:53:13.646 [2024-11-20 14:14:10.758559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.646 [2024-11-20 14:14:10.818496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.646 [2024-11-20 14:14:10.818593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:53:13.646 [2024-11-20 14:14:10.818617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.860 ms 00:53:13.646 [2024-11-20 14:14:10.818634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.646 [2024-11-20 14:14:10.875281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.646 [2024-11-20 14:14:10.875349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:53:13.646 [2024-11-20 14:14:10.875366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.569 ms 00:53:13.646 [2024-11-20 14:14:10.875376] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.646 [2024-11-20 14:14:10.915316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.646 [2024-11-20 14:14:10.915379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:53:13.646 [2024-11-20 14:14:10.915395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.787 ms 00:53:13.646 [2024-11-20 14:14:10.915432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.646 [2024-11-20 14:14:10.915495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:53:13.646 [2024-11-20 14:14:10.915523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:53:13.646 [2024-11-20 14:14:10.915563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:53:13.646 [2024-11-20 14:14:10.915576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915808] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.915993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 
14:14:10.916118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:53:13.646 [2024-11-20 14:14:10.916305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:53:13.647 [2024-11-20 14:14:10.916428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:53:13.647 [2024-11-20 14:14:10.916811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:53:13.647 [2024-11-20 14:14:10.916827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bcf76a6-8843-4978-96b3-1dfc317e1622 00:53:13.647 [2024-11-20 14:14:10.916839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:53:13.647 [2024-11-20 14:14:10.916850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:53:13.647 [2024-11-20 14:14:10.916860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:53:13.647 [2024-11-20 14:14:10.916872] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:53:13.647 [2024-11-20 14:14:10.916882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:53:13.647 [2024-11-20 14:14:10.916893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:53:13.647 [2024-11-20 14:14:10.916916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:53:13.647 [2024-11-20 14:14:10.916926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:53:13.647 [2024-11-20 14:14:10.916936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:53:13.647 [2024-11-20 14:14:10.916948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.647 [2024-11-20 14:14:10.916959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:53:13.647 [2024-11-20 14:14:10.916971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:53:13.647 [2024-11-20 14:14:10.916988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.647 [2024-11-20 14:14:10.938692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.647 [2024-11-20 14:14:10.938746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:53:13.647 [2024-11-20 14:14:10.938761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.639 ms 00:53:13.647 [2024-11-20 14:14:10.938772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.647 [2024-11-20 14:14:10.939378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:13.647 [2024-11-20 14:14:10.939397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:53:13.647 [2024-11-20 14:14:10.939417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:53:13.647 [2024-11-20 14:14:10.939428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.906 [2024-11-20 14:14:10.993701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:13.906 [2024-11-20 14:14:10.993764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:53:13.906 [2024-11-20 14:14:10.993780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:13.906 [2024-11-20 14:14:10.993791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.906 [2024-11-20 14:14:10.993865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:13.906 [2024-11-20 14:14:10.993876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:53:13.906 [2024-11-20 14:14:10.993893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:13.906 [2024-11-20 14:14:10.993903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.906 [2024-11-20 14:14:10.993979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:13.906 [2024-11-20 14:14:10.993993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:53:13.906 [2024-11-20 14:14:10.994003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:13.906 [2024-11-20 14:14:10.994013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.906 [2024-11-20 14:14:10.994046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:13.906 [2024-11-20 14:14:10.994059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:53:13.906 [2024-11-20 14:14:10.994070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:13.906 [2024-11-20 14:14:10.994085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:13.906 [2024-11-20 14:14:11.127714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:13.906 [2024-11-20 14:14:11.127979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:53:13.906 [2024-11-20 14:14:11.128023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:13.906 [2024-11-20 14:14:11.128036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.163 [2024-11-20 14:14:11.243051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.163 [2024-11-20 14:14:11.243134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:53:14.163 [2024-11-20 14:14:11.243160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.163 [2024-11-20 14:14:11.243174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.163 [2024-11-20 14:14:11.243291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.163 [2024-11-20 14:14:11.243306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:53:14.163 [2024-11-20 14:14:11.243317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.163 [2024-11-20 14:14:11.243328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.163 [2024-11-20 14:14:11.243379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.164 [2024-11-20 14:14:11.243391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:53:14.164 [2024-11-20 14:14:11.243403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.164 [2024-11-20 14:14:11.243414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.164 [2024-11-20 14:14:11.243608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.164 
[2024-11-20 14:14:11.243626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:53:14.164 [2024-11-20 14:14:11.243639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.164 [2024-11-20 14:14:11.243651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.164 [2024-11-20 14:14:11.243693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.164 [2024-11-20 14:14:11.243709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:53:14.164 [2024-11-20 14:14:11.243720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.164 [2024-11-20 14:14:11.243732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.164 [2024-11-20 14:14:11.243781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.164 [2024-11-20 14:14:11.243794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:53:14.164 [2024-11-20 14:14:11.243806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.164 [2024-11-20 14:14:11.243817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.164 [2024-11-20 14:14:11.243866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:14.164 [2024-11-20 14:14:11.243880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:53:14.164 [2024-11-20 14:14:11.243891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:14.164 [2024-11-20 14:14:11.243903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:14.164 [2024-11-20 14:14:11.244034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 665.341 ms, result 0 00:53:15.097 00:53:15.097 00:53:15.356 14:14:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:53:17.887 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81571 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81571 ']' 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81571 00:53:17.887 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81571) - No such process 00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81571 is not found' 00:53:17.887 Process with pid 81571 is not found 
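The killprocess step above probes pid 81571 with kill -0 before signalling, and treats an already-exited process as success ("Process with pid 81571 is not found"), which keeps the cleanup path idempotent. A minimal sketch of that pattern, using only what the trace shows — the real helper in autotest_common.sh also inspects the process name and guards sudo-owned processes, as visible later in this log:

    killprocess() {
      local pid=$1
      [[ -z "$pid" ]] && return 1                     # no pid recorded, nothing to do
      if ! kill -0 "$pid" 2>/dev/null; then           # probe only; signal 0 delivers nothing
        echo "Process with pid $pid is not found"
        return 0
      fi
      kill "$pid" && wait "$pid" 2>/dev/null || true  # reap; works because the pid is a child of this shell
    }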
00:53:17.887 14:14:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:53:18.146 14:14:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:53:18.146 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:53:18.146 Remove shared memory files 00:53:18.146 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:53:18.147 00:53:18.147 real 3m18.203s 00:53:18.147 user 3m44.059s 00:53:18.147 sys 0m41.648s 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:18.147 14:14:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:18.147 ************************************ 00:53:18.147 END TEST ftl_dirty_shutdown 00:53:18.147 ************************************ 00:53:18.147 14:14:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:53:18.147 14:14:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:53:18.147 14:14:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:18.147 14:14:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:53:18.147 ************************************ 00:53:18.147 START TEST ftl_upgrade_shutdown 00:53:18.147 ************************************ 00:53:18.147 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:53:18.147 * Looking for test storage... 
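Both the "END TEST ftl_dirty_shutdown" banner (with its real/user/sys timing) and the "START TEST ftl_upgrade_shutdown" banner come from the run_test wrapper that autotest places around every test body. A condensed sketch of that wrapper — the banner text matches the log, but the internals (timing_enter/timing_exit and xtrace handling in the real helper) are simplified away:

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                  # produces the real/user/sys line seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }

    run_test ftl_upgrade_shutdown "$testdir/upgrade_shutdown.sh" 0000:00:11.0 0000:00:10.0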
00:53:18.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:53:18.147 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:18.147 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:53:18.147 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:53:18.405 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:18.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:18.406 --rc genhtml_branch_coverage=1 00:53:18.406 --rc genhtml_function_coverage=1 00:53:18.406 --rc genhtml_legend=1 00:53:18.406 --rc geninfo_all_blocks=1 00:53:18.406 --rc geninfo_unexecuted_blocks=1 00:53:18.406 00:53:18.406 ' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:18.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:18.406 --rc genhtml_branch_coverage=1 00:53:18.406 --rc genhtml_function_coverage=1 00:53:18.406 --rc genhtml_legend=1 00:53:18.406 --rc geninfo_all_blocks=1 00:53:18.406 --rc geninfo_unexecuted_blocks=1 00:53:18.406 00:53:18.406 ' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:18.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:18.406 --rc genhtml_branch_coverage=1 00:53:18.406 --rc genhtml_function_coverage=1 00:53:18.406 --rc genhtml_legend=1 00:53:18.406 --rc geninfo_all_blocks=1 00:53:18.406 --rc geninfo_unexecuted_blocks=1 00:53:18.406 00:53:18.406 ' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:18.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:18.406 --rc genhtml_branch_coverage=1 00:53:18.406 --rc genhtml_function_coverage=1 00:53:18.406 --rc genhtml_legend=1 00:53:18.406 --rc geninfo_all_blocks=1 00:53:18.406 --rc geninfo_unexecuted_blocks=1 00:53:18.406 00:53:18.406 ' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:53:18.406 14:14:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83658 00:53:18.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83658 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83658 ']' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:18.406 14:14:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:18.406 [2024-11-20 14:14:15.709529] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
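The waitforlisten 83658 call above blocks until the spdk_tgt just launched on cpumask [0] starts answering on /var/tmp/spdk.sock. A sketch of that polling loop; the rpc_get_methods probe and the retry count are assumptions for illustration, not the exact body of the helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
      # a successful RPC round-trip means the target is up and listening
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
      fi
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # give up if the target died during init
      sleep 0.1
    done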
00:53:18.406 [2024-11-20 14:14:15.709807] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83658 ] 00:53:18.665 [2024-11-20 14:14:15.910579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:18.924 [2024-11-20 14:14:16.052405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:53:19.859 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:53:19.860 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:53:20.119 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:53:20.687 { 00:53:20.687 "name": "basen1", 00:53:20.687 "aliases": [ 00:53:20.687 "58a2f51a-3e09-47f5-a225-ccc54ffa2a70" 00:53:20.687 ], 00:53:20.687 "product_name": "NVMe disk", 00:53:20.687 "block_size": 4096, 00:53:20.687 "num_blocks": 1310720, 00:53:20.687 "uuid": "58a2f51a-3e09-47f5-a225-ccc54ffa2a70", 00:53:20.687 "numa_id": -1, 00:53:20.687 "assigned_rate_limits": { 00:53:20.687 "rw_ios_per_sec": 0, 00:53:20.687 "rw_mbytes_per_sec": 0, 00:53:20.687 "r_mbytes_per_sec": 0, 00:53:20.687 "w_mbytes_per_sec": 0 00:53:20.687 }, 00:53:20.687 "claimed": true, 00:53:20.687 "claim_type": "read_many_write_one", 00:53:20.687 "zoned": false, 00:53:20.687 "supported_io_types": { 00:53:20.687 "read": true, 00:53:20.687 "write": true, 00:53:20.687 "unmap": true, 00:53:20.687 "flush": true, 00:53:20.687 "reset": true, 00:53:20.687 "nvme_admin": true, 00:53:20.687 "nvme_io": true, 00:53:20.687 "nvme_io_md": false, 00:53:20.687 "write_zeroes": true, 00:53:20.687 "zcopy": false, 00:53:20.687 "get_zone_info": false, 00:53:20.687 "zone_management": false, 00:53:20.687 "zone_append": false, 00:53:20.687 "compare": true, 00:53:20.687 "compare_and_write": false, 00:53:20.687 "abort": true, 00:53:20.687 "seek_hole": false, 00:53:20.687 "seek_data": false, 00:53:20.687 "copy": true, 00:53:20.687 "nvme_iov_md": false 00:53:20.687 }, 00:53:20.687 "driver_specific": { 00:53:20.687 "nvme": [ 00:53:20.687 { 00:53:20.687 "pci_address": "0000:00:11.0", 00:53:20.687 "trid": { 00:53:20.687 "trtype": "PCIe", 00:53:20.687 "traddr": "0000:00:11.0" 00:53:20.687 }, 00:53:20.687 "ctrlr_data": { 00:53:20.687 "cntlid": 0, 00:53:20.687 "vendor_id": "0x1b36", 00:53:20.687 "model_number": "QEMU NVMe Ctrl", 00:53:20.687 "serial_number": "12341", 00:53:20.687 "firmware_revision": "8.0.0", 00:53:20.687 "subnqn": "nqn.2019-08.org.qemu:12341", 00:53:20.687 "oacs": { 00:53:20.687 "security": 0, 00:53:20.687 "format": 1, 00:53:20.687 "firmware": 0, 00:53:20.687 "ns_manage": 1 00:53:20.687 }, 00:53:20.687 "multi_ctrlr": false, 00:53:20.687 "ana_reporting": false 00:53:20.687 }, 00:53:20.687 "vs": { 00:53:20.687 "nvme_version": "1.4" 00:53:20.687 }, 00:53:20.687 "ns_data": { 00:53:20.687 "id": 1, 00:53:20.687 "can_share": false 00:53:20.687 } 00:53:20.687 } 00:53:20.687 ], 00:53:20.687 "mp_policy": "active_passive" 00:53:20.687 } 00:53:20.687 } 00:53:20.687 ]' 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:53:20.687 14:14:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:53:20.946 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=07c130f2-1dfc-49a9-a398-0e12c7879a7a 00:53:20.946 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:53:20.946 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07c130f2-1dfc-49a9-a398-0e12c7879a7a 00:53:21.205 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:53:21.464 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2f64fd16-cf5c-4ddf-9880-773b310174d2 00:53:21.464 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2f64fd16-cf5c-4ddf-9880-773b310174d2 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8224d221-e2ea-4d8d-a2b6-8d14709cb408 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8224d221-e2ea-4d8d-a2b6-8d14709cb408 ]] 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8224d221-e2ea-4d8d-a2b6-8d14709cb408 5120 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8224d221-e2ea-4d8d-a2b6-8d14709cb408 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8224d221-e2ea-4d8d-a2b6-8d14709cb408 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8224d221-e2ea-4d8d-a2b6-8d14709cb408 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:53:21.722 14:14:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8224d221-e2ea-4d8d-a2b6-8d14709cb408 00:53:21.981 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:53:21.981 { 00:53:21.981 "name": "8224d221-e2ea-4d8d-a2b6-8d14709cb408", 00:53:21.981 "aliases": [ 00:53:21.981 "lvs/basen1p0" 00:53:21.981 ], 00:53:21.981 "product_name": "Logical Volume", 00:53:21.981 "block_size": 4096, 00:53:21.981 "num_blocks": 5242880, 00:53:21.981 "uuid": "8224d221-e2ea-4d8d-a2b6-8d14709cb408", 00:53:21.981 "assigned_rate_limits": { 00:53:21.981 "rw_ios_per_sec": 0, 00:53:21.981 "rw_mbytes_per_sec": 0, 00:53:21.981 "r_mbytes_per_sec": 0, 00:53:21.981 "w_mbytes_per_sec": 0 00:53:21.981 }, 00:53:21.981 "claimed": false, 00:53:21.981 "zoned": false, 00:53:21.981 "supported_io_types": { 00:53:21.981 "read": true, 00:53:21.981 "write": true, 00:53:21.981 "unmap": true, 00:53:21.981 "flush": false, 00:53:21.981 "reset": true, 00:53:21.981 "nvme_admin": false, 00:53:21.981 "nvme_io": false, 00:53:21.981 "nvme_io_md": false, 00:53:21.981 "write_zeroes": 
true, 00:53:21.981 "zcopy": false, 00:53:21.981 "get_zone_info": false, 00:53:21.981 "zone_management": false, 00:53:21.981 "zone_append": false, 00:53:21.981 "compare": false, 00:53:21.981 "compare_and_write": false, 00:53:21.981 "abort": false, 00:53:21.981 "seek_hole": true, 00:53:21.981 "seek_data": true, 00:53:21.981 "copy": false, 00:53:21.981 "nvme_iov_md": false 00:53:21.981 }, 00:53:21.981 "driver_specific": { 00:53:21.981 "lvol": { 00:53:21.981 "lvol_store_uuid": "2f64fd16-cf5c-4ddf-9880-773b310174d2", 00:53:21.981 "base_bdev": "basen1", 00:53:21.981 "thin_provision": true, 00:53:21.981 "num_allocated_clusters": 0, 00:53:21.981 "snapshot": false, 00:53:21.981 "clone": false, 00:53:21.981 "esnap_clone": false 00:53:21.981 } 00:53:21.981 } 00:53:21.981 } 00:53:21.981 ]' 00:53:21.981 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:53:21.981 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:53:21.981 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:53:22.239 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:53:22.498 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:53:22.498 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:53:22.498 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:53:22.757 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:53:22.757 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:53:22.757 14:14:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8224d221-e2ea-4d8d-a2b6-8d14709cb408 -c cachen1p0 --l2p_dram_limit 2 00:53:23.016 [2024-11-20 14:14:20.169326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.169397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:53:23.016 [2024-11-20 14:14:20.169419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:53:23.016 [2024-11-20 14:14:20.169432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.169537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.169553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:23.016 [2024-11-20 14:14:20.169569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:53:23.016 [2024-11-20 14:14:20.169582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.169610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:53:23.016 [2024-11-20 
14:14:20.170730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:53:23.016 [2024-11-20 14:14:20.170771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.170784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:23.016 [2024-11-20 14:14:20.170800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.162 ms 00:53:23.016 [2024-11-20 14:14:20.170812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.170999] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c5110b1b-b875-4f10-bb21-e6aa2465c979 00:53:23.016 [2024-11-20 14:14:20.172587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.172634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:53:23.016 [2024-11-20 14:14:20.172650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:53:23.016 [2024-11-20 14:14:20.172664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.180511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.180570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:23.016 [2024-11-20 14:14:20.180587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.781 ms 00:53:23.016 [2024-11-20 14:14:20.180602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.180668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.180690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:23.016 [2024-11-20 14:14:20.180703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:53:23.016 [2024-11-20 14:14:20.180720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.180788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.180805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:53:23.016 [2024-11-20 14:14:20.180817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:53:23.016 [2024-11-20 14:14:20.180839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.180885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:53:23.016 [2024-11-20 14:14:20.186632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.186688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:23.016 [2024-11-20 14:14:20.186707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.766 ms 00:53:23.016 [2024-11-20 14:14:20.186737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.186781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.186795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:53:23.016 [2024-11-20 14:14:20.186811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:53:23.016 [2024-11-20 14:14:20.186823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.186922] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:53:23.016 [2024-11-20 14:14:20.187086] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:53:23.016 [2024-11-20 14:14:20.187118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:53:23.016 [2024-11-20 14:14:20.187135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:53:23.016 [2024-11-20 14:14:20.187154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:53:23.016 [2024-11-20 14:14:20.187169] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:53:23.016 [2024-11-20 14:14:20.187185] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:53:23.016 [2024-11-20 14:14:20.187197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:53:23.016 [2024-11-20 14:14:20.187215] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:53:23.016 [2024-11-20 14:14:20.187227] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:53:23.016 [2024-11-20 14:14:20.187242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.187255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:53:23.016 [2024-11-20 14:14:20.187270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:53:23.016 [2024-11-20 14:14:20.187282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.187376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.016 [2024-11-20 14:14:20.187395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:53:23.016 [2024-11-20 14:14:20.187412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:53:23.016 [2024-11-20 14:14:20.187437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.016 [2024-11-20 14:14:20.187584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:53:23.016 [2024-11-20 14:14:20.187608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:53:23.016 [2024-11-20 14:14:20.187625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:23.016 [2024-11-20 14:14:20.187637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.016 [2024-11-20 14:14:20.187653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:53:23.016 [2024-11-20 14:14:20.187664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:53:23.016 [2024-11-20 14:14:20.187678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:53:23.016 [2024-11-20 14:14:20.187690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:53:23.016 [2024-11-20 14:14:20.187704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:53:23.017 [2024-11-20 14:14:20.187715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.187729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:53:23.017 [2024-11-20 14:14:20.187740] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:53:23.017 [2024-11-20 14:14:20.187754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.187766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:53:23.017 [2024-11-20 14:14:20.187780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:53:23.017 [2024-11-20 14:14:20.187791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.187812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:53:23.017 [2024-11-20 14:14:20.187824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:53:23.017 [2024-11-20 14:14:20.187838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.187849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:53:23.017 [2024-11-20 14:14:20.187863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:53:23.017 [2024-11-20 14:14:20.187874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:23.017 [2024-11-20 14:14:20.187888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:53:23.017 [2024-11-20 14:14:20.187900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:53:23.017 [2024-11-20 14:14:20.187914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:23.017 [2024-11-20 14:14:20.187925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:53:23.017 [2024-11-20 14:14:20.187938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:53:23.017 [2024-11-20 14:14:20.187949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:23.017 [2024-11-20 14:14:20.187964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:53:23.017 [2024-11-20 14:14:20.187975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:53:23.017 [2024-11-20 14:14:20.187989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:23.017 [2024-11-20 14:14:20.188000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:53:23.017 [2024-11-20 14:14:20.188016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:53:23.017 [2024-11-20 14:14:20.188028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:53:23.017 [2024-11-20 14:14:20.188053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:53:23.017 [2024-11-20 14:14:20.188067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:53:23.017 [2024-11-20 14:14:20.188092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:53:23.017 [2024-11-20 14:14:20.188129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:53:23.017 [2024-11-20 14:14:20.188143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188154] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:53:23.017 [2024-11-20 14:14:20.188171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:53:23.017 [2024-11-20 14:14:20.188184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:23.017 [2024-11-20 14:14:20.188199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:23.017 [2024-11-20 14:14:20.188211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:53:23.017 [2024-11-20 14:14:20.188229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:53:23.017 [2024-11-20 14:14:20.188240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:53:23.017 [2024-11-20 14:14:20.188255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:53:23.017 [2024-11-20 14:14:20.188265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:53:23.017 [2024-11-20 14:14:20.188280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:53:23.017 [2024-11-20 14:14:20.188297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:53:23.017 [2024-11-20 14:14:20.188315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:53:23.017 [2024-11-20 14:14:20.188348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:53:23.017 [2024-11-20 14:14:20.188389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:53:23.017 [2024-11-20 14:14:20.188404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:53:23.017 [2024-11-20 14:14:20.188417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:53:23.017 [2024-11-20 14:14:20.188432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:53:23.017 [2024-11-20 14:14:20.188552] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:53:23.017 [2024-11-20 14:14:20.188569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:53:23.017 [2024-11-20 14:14:20.188605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:53:23.017 [2024-11-20 14:14:20.188618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:53:23.017 [2024-11-20 14:14:20.188633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:53:23.017 [2024-11-20 14:14:20.188647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:23.017 [2024-11-20 14:14:20.188662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:53:23.017 [2024-11-20 14:14:20.188676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:53:23.017 [2024-11-20 14:14:20.188692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:23.017 [2024-11-20 14:14:20.188745] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
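Pulled out of the trace above, the bdev stack that bdev_ftl_create formats is: a thin-provisioned 20 GiB lvol (basen1p0) carved from the base NVMe at 0000:00:11.0, plus the first 5 GiB split of the cache NVMe at 0000:00:10.0 as the non-volatile write cache. Replayed in isolation, with the UUIDs this particular run generated (clear_lvols first deleted a stale lvstore, 07c130f2-...; get_bdev_size derives sizes as jq '.[] .num_blocks' × '.[] .block_size'):

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs                            # -> 2f64fd16-cf5c-4ddf-9880-773b310174d2
    rpc.py bdev_lvol_create basen1p0 20480 -t -u 2f64fd16-cf5c-4ddf-9880-773b310174d2
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                            # first 5120 MiB -> cachen1p0
    rpc.py -t 60 bdev_ftl_create -b ftl -d 8224d221-e2ea-4d8d-a2b6-8d14709cb408 -c cachen1p0 --l2p_dram_limit 2

The startup trace that follows (new superblock c5110b1b-..., layout dump, "Scrub NV cache") is FTL formatting this stack for first use, which is why the 2593.829 ms scrub of 5 chunks dominates the total startup time.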
00:53:23.017 [2024-11-20 14:14:20.188768] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:53:25.547 [2024-11-20 14:14:22.782591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.547 [2024-11-20 14:14:22.782698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:53:25.547 [2024-11-20 14:14:22.782720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2593.829 ms 00:53:25.547 [2024-11-20 14:14:22.782736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.547 [2024-11-20 14:14:22.827669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.547 [2024-11-20 14:14:22.827763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:25.547 [2024-11-20 14:14:22.827784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.518 ms 00:53:25.547 [2024-11-20 14:14:22.827817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.547 [2024-11-20 14:14:22.827969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.547 [2024-11-20 14:14:22.827989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:25.547 [2024-11-20 14:14:22.828003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:53:25.547 [2024-11-20 14:14:22.828028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.806 [2024-11-20 14:14:22.883099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.806 [2024-11-20 14:14:22.883180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:25.806 [2024-11-20 14:14:22.883200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.995 ms 00:53:25.806 [2024-11-20 14:14:22.883217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.806 [2024-11-20 14:14:22.883279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.806 [2024-11-20 14:14:22.883302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:25.806 [2024-11-20 14:14:22.883315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:25.806 [2024-11-20 14:14:22.883329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.806 [2024-11-20 14:14:22.883917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.806 [2024-11-20 14:14:22.883953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:25.806 [2024-11-20 14:14:22.883968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:53:25.806 [2024-11-20 14:14:22.883984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.806 [2024-11-20 14:14:22.884050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.806 [2024-11-20 14:14:22.884067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:25.806 [2024-11-20 14:14:22.884083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:53:25.806 [2024-11-20 14:14:22.884101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.806 [2024-11-20 14:14:22.908358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:22.908437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:25.807 [2024-11-20 14:14:22.908456] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.229 ms 00:53:25.807 [2024-11-20 14:14:22.908473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:22.934753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:25.807 [2024-11-20 14:14:22.936162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:22.936201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:25.807 [2024-11-20 14:14:22.936223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.527 ms 00:53:25.807 [2024-11-20 14:14:22.936236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:22.970806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:22.970909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:53:25.807 [2024-11-20 14:14:22.970934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.496 ms 00:53:25.807 [2024-11-20 14:14:22.970947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:22.971064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:22.971082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:53:25.807 [2024-11-20 14:14:22.971102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:53:25.807 [2024-11-20 14:14:22.971114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:23.017927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:23.018008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:53:25.807 [2024-11-20 14:14:23.018031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.709 ms 00:53:25.807 [2024-11-20 14:14:23.018045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:23.064410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:23.064504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:53:25.807 [2024-11-20 14:14:23.064528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.293 ms 00:53:25.807 [2024-11-20 14:14:23.064540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:25.807 [2024-11-20 14:14:23.065412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:25.807 [2024-11-20 14:14:23.065443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:25.807 [2024-11-20 14:14:23.065460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.814 ms 00:53:25.807 [2024-11-20 14:14:23.065476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.183995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.066 [2024-11-20 14:14:23.184097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:53:26.066 [2024-11-20 14:14:23.184128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 118.394 ms 00:53:26.066 [2024-11-20 14:14:23.184141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.232531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:53:26.066 [2024-11-20 14:14:23.232616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:53:26.066 [2024-11-20 14:14:23.232653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.204 ms 00:53:26.066 [2024-11-20 14:14:23.232666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.280047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.066 [2024-11-20 14:14:23.280134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:53:26.066 [2024-11-20 14:14:23.280159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.270 ms 00:53:26.066 [2024-11-20 14:14:23.280170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.327734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.066 [2024-11-20 14:14:23.327827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:53:26.066 [2024-11-20 14:14:23.327862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.442 ms 00:53:26.066 [2024-11-20 14:14:23.327875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.327973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.066 [2024-11-20 14:14:23.327988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:26.066 [2024-11-20 14:14:23.328008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:53:26.066 [2024-11-20 14:14:23.328020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.328193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:26.066 [2024-11-20 14:14:23.328209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:26.066 [2024-11-20 14:14:23.328229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:53:26.066 [2024-11-20 14:14:23.328241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:26.066 [2024-11-20 14:14:23.329558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3159.706 ms, result 0 00:53:26.066 { 00:53:26.066 "name": "ftl", 00:53:26.066 "uuid": "c5110b1b-b875-4f10-bb21-e6aa2465c979" 00:53:26.066 } 00:53:26.066 14:14:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:53:26.325 [2024-11-20 14:14:23.640624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:26.583 14:14:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:53:26.841 14:14:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:53:26.841 [2024-11-20 14:14:24.109170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:26.841 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:53:27.099 [2024-11-20 14:14:24.336039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:27.099 14:14:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:53:27.665 Fill FTL, iteration 1 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83786 00:53:27.665 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83786 /var/tmp/spdk.tgt.sock 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83786 ']' 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:53:27.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:27.666 14:14:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:27.666 [2024-11-20 14:14:24.943811] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:53:27.666 [2024-11-20 14:14:24.943954] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:53:27.924 [2024-11-20 14:14:25.134045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:28.183 [2024-11-20 14:14:25.313380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:29.120 14:14:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:29.120 14:14:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:29.120 14:14:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:53:29.720 ftln1 00:53:29.720 14:14:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:53:29.720 14:14:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83786 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83786 ']' 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83786 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:29.720 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83786 00:53:29.979 killing process with pid 83786 00:53:29.979 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:29.979 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:29.979 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83786' 00:53:29.979 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83786 00:53:29.979 14:14:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83786 00:53:32.519 14:14:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:53:32.519 14:14:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:53:32.777 [2024-11-20 14:14:29.903925] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:53:32.777 [2024-11-20 14:14:29.904078] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83852 ] 00:53:32.777 [2024-11-20 14:14:30.095489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:33.035 [2024-11-20 14:14:30.232024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:34.487  [2024-11-20T14:14:33.184Z] Copying: 214/1024 [MB] (214 MBps) [2024-11-20T14:14:34.116Z] Copying: 401/1024 [MB] (187 MBps) [2024-11-20T14:14:35.062Z] Copying: 578/1024 [MB] (177 MBps) [2024-11-20T14:14:35.997Z] Copying: 765/1024 [MB] (187 MBps) [2024-11-20T14:14:36.255Z] Copying: 961/1024 [MB] (196 MBps) [2024-11-20T14:14:37.631Z] Copying: 1024/1024 [MB] (average 191 MBps) 00:53:40.308 00:53:40.308 Calculate MD5 checksum, iteration 1 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:40.308 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:40.308 [2024-11-20 14:14:37.596413] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:53:40.308 [2024-11-20 14:14:37.596574] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83927 ] 00:53:40.566 [2024-11-20 14:14:37.776678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:40.823 [2024-11-20 14:14:37.967358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:42.195  [2024-11-20T14:14:40.893Z] Copying: 521/1024 [MB] (521 MBps) [2024-11-20T14:14:40.893Z] Copying: 1023/1024 [MB] (502 MBps) [2024-11-20T14:14:41.827Z] Copying: 1024/1024 [MB] (average 511 MBps) 00:53:44.504 00:53:44.504 14:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:53:44.504 14:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:53:47.029 Fill FTL, iteration 2 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6ecf9918a4f4f5cf84e76b91294d968e 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:47.029 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:53:47.029 [2024-11-20 14:14:43.906561] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:53:47.029 [2024-11-20 14:14:43.906723] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83996 ] 00:53:47.029 [2024-11-20 14:14:44.085446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:47.029 [2024-11-20 14:14:44.237067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:48.924  [2024-11-20T14:14:46.812Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-20T14:14:48.187Z] Copying: 425/1024 [MB] (208 MBps) [2024-11-20T14:14:49.121Z] Copying: 645/1024 [MB] (220 MBps) [2024-11-20T14:14:49.687Z] Copying: 865/1024 [MB] (220 MBps) [2024-11-20T14:14:51.061Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:53:53.738 00:53:53.738 Calculate MD5 checksum, iteration 2 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:53.738 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:53.738 [2024-11-20 14:14:50.947641] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:53:53.738 [2024-11-20 14:14:50.947822] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:53:53.996 [2024-11-20 14:14:51.146914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:53.996 [2024-11-20 14:14:51.296874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:55.896  [2024-11-20T14:14:54.154Z] Copying: 569/1024 [MB] (569 MBps) [2024-11-20T14:14:55.666Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:53:58.343 00:53:58.343 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:53:58.343 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:00.874 14:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:54:00.874 14:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=040f3c24af0bfa6674b0b689371b47cb 00:54:00.874 14:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:54:00.874 14:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:54:00.874 14:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:54:00.874 [2024-11-20 14:14:58.064456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:00.874 [2024-11-20 14:14:58.064533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:54:00.874 [2024-11-20 14:14:58.064552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:54:00.874 [2024-11-20 14:14:58.064566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:00.874 [2024-11-20 14:14:58.064601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:00.874 [2024-11-20 14:14:58.064614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:54:00.874 [2024-11-20 14:14:58.064632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:00.874 [2024-11-20 14:14:58.064644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:00.874 [2024-11-20 14:14:58.064669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:00.874 [2024-11-20 14:14:58.064682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:54:00.874 [2024-11-20 14:14:58.064694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:54:00.874 [2024-11-20 14:14:58.064706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:00.874 [2024-11-20 14:14:58.064784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:54:00.874 true 00:54:00.874 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:01.133 { 00:54:01.133 "name": "ftl", 00:54:01.133 "properties": [ 00:54:01.133 { 00:54:01.133 "name": "superblock_version", 00:54:01.133 "value": 5, 00:54:01.133 "read-only": true 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "name": "base_device", 00:54:01.133 "bands": [ 00:54:01.133 { 00:54:01.133 "id": 0, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 
00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 1, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 2, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 3, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 4, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 5, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 6, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 7, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 8, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 9, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 10, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 11, 00:54:01.133 "state": "FREE", 00:54:01.133 "validity": 0.0 00:54:01.133 }, 00:54:01.133 { 00:54:01.133 "id": 12, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 13, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 14, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 15, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 16, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 17, 00:54:01.134 "state": "FREE", 00:54:01.134 "validity": 0.0 00:54:01.134 } 00:54:01.134 ], 00:54:01.134 "read-only": true 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "name": "cache_device", 00:54:01.134 "type": "bdev", 00:54:01.134 "chunks": [ 00:54:01.134 { 00:54:01.134 "id": 0, 00:54:01.134 "state": "INACTIVE", 00:54:01.134 "utilization": 0.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 1, 00:54:01.134 "state": "CLOSED", 00:54:01.134 "utilization": 1.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 2, 00:54:01.134 "state": "CLOSED", 00:54:01.134 "utilization": 1.0 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 3, 00:54:01.134 "state": "OPEN", 00:54:01.134 "utilization": 0.001953125 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "id": 4, 00:54:01.134 "state": "OPEN", 00:54:01.134 "utilization": 0.0 00:54:01.134 } 00:54:01.134 ], 00:54:01.134 "read-only": true 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "name": "verbose_mode", 00:54:01.134 "value": true, 00:54:01.134 "unit": "", 00:54:01.134 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:54:01.134 }, 00:54:01.134 { 00:54:01.134 "name": "prep_upgrade_on_shutdown", 00:54:01.134 "value": false, 00:54:01.134 "unit": "", 00:54:01.134 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:54:01.134 } 00:54:01.134 ] 00:54:01.134 } 00:54:01.134 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:54:01.394 [2024-11-20 14:14:58.549023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:54:01.394 [2024-11-20 14:14:58.549087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:54:01.394 [2024-11-20 14:14:58.549105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:54:01.394 [2024-11-20 14:14:58.549118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:01.394 [2024-11-20 14:14:58.549149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:01.394 [2024-11-20 14:14:58.549162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:54:01.394 [2024-11-20 14:14:58.549175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:01.394 [2024-11-20 14:14:58.549186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:01.394 [2024-11-20 14:14:58.549210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:01.394 [2024-11-20 14:14:58.549222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:54:01.394 [2024-11-20 14:14:58.549234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:54:01.394 [2024-11-20 14:14:58.549245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:01.394 [2024-11-20 14:14:58.549353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:54:01.394 true 00:54:01.394 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:54:01.394 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:54:01.394 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:01.654 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:54:01.654 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:54:01.654 14:14:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:54:01.913 [2024-11-20 14:14:59.101661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:01.913 [2024-11-20 14:14:59.101729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:54:01.913 [2024-11-20 14:14:59.101748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:54:01.913 [2024-11-20 14:14:59.101762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:01.913 [2024-11-20 14:14:59.101796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:01.913 [2024-11-20 14:14:59.101809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:54:01.913 [2024-11-20 14:14:59.101822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:01.913 [2024-11-20 14:14:59.101833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:01.913 [2024-11-20 14:14:59.101857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:01.913 [2024-11-20 14:14:59.101870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:54:01.913 [2024-11-20 14:14:59.101883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:54:01.913 [2024-11-20 14:14:59.101894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:54:01.913 [2024-11-20 14:14:59.101962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.294 ms, result 0 00:54:01.913 true 00:54:01.913 14:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:02.171 { 00:54:02.171 "name": "ftl", 00:54:02.171 "properties": [ 00:54:02.171 { 00:54:02.171 "name": "superblock_version", 00:54:02.171 "value": 5, 00:54:02.171 "read-only": true 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "name": "base_device", 00:54:02.171 "bands": [ 00:54:02.171 { 00:54:02.171 "id": 0, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 1, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 2, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 3, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 4, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 5, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 6, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 7, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 8, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 9, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 10, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 11, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 12, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 13, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 14, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 15, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 16, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 17, 00:54:02.171 "state": "FREE", 00:54:02.171 "validity": 0.0 00:54:02.171 } 00:54:02.171 ], 00:54:02.171 "read-only": true 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "name": "cache_device", 00:54:02.171 "type": "bdev", 00:54:02.171 "chunks": [ 00:54:02.171 { 00:54:02.171 "id": 0, 00:54:02.171 "state": "INACTIVE", 00:54:02.171 "utilization": 0.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 1, 00:54:02.171 "state": "CLOSED", 00:54:02.171 "utilization": 1.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 2, 00:54:02.171 "state": "CLOSED", 00:54:02.171 "utilization": 1.0 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 3, 00:54:02.171 "state": "OPEN", 00:54:02.171 "utilization": 0.001953125 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "id": 4, 00:54:02.171 "state": "OPEN", 00:54:02.171 "utilization": 0.0 00:54:02.171 } 00:54:02.171 ], 00:54:02.171 "read-only": true 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "name": "verbose_mode", 
00:54:02.171 "value": true, 00:54:02.171 "unit": "", 00:54:02.171 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:54:02.171 }, 00:54:02.171 { 00:54:02.171 "name": "prep_upgrade_on_shutdown", 00:54:02.171 "value": true, 00:54:02.171 "unit": "", 00:54:02.171 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:54:02.171 } 00:54:02.171 ] 00:54:02.171 } 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83658 ]] 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83658 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83658 ']' 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83658 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:02.171 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83658 00:54:02.428 killing process with pid 83658 00:54:02.428 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:02.428 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:02.428 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83658' 00:54:02.428 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83658 00:54:02.428 14:14:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83658 00:54:03.802 [2024-11-20 14:15:00.858812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:54:03.802 [2024-11-20 14:15:00.883110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:03.802 [2024-11-20 14:15:00.883189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:54:03.802 [2024-11-20 14:15:00.883212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:54:03.802 [2024-11-20 14:15:00.883225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:03.802 [2024-11-20 14:15:00.883254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:54:03.802 [2024-11-20 14:15:00.888209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:03.802 [2024-11-20 14:15:00.888261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:54:03.802 [2024-11-20 14:15:00.888280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.932 ms 00:54:03.802 [2024-11-20 14:15:00.888293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:08.972249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:08.972348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:54:11.913 [2024-11-20 14:15:08.972377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8083.856 ms 00:54:11.913 [2024-11-20 14:15:08.972407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:08.973657] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:08.973705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:54:11.913 [2024-11-20 14:15:08.973722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.214 ms 00:54:11.913 [2024-11-20 14:15:08.973735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:08.974903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:08.974938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:54:11.913 [2024-11-20 14:15:08.974968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:54:11.913 [2024-11-20 14:15:08.974989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:08.993862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:08.993943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:54:11.913 [2024-11-20 14:15:08.993963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.818 ms 00:54:11.913 [2024-11-20 14:15:08.993977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:09.005318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:09.005395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:54:11.913 [2024-11-20 14:15:09.005416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.268 ms 00:54:11.913 [2024-11-20 14:15:09.005430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:09.005575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.913 [2024-11-20 14:15:09.005593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:54:11.913 [2024-11-20 14:15:09.005618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:54:11.913 [2024-11-20 14:15:09.005631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.913 [2024-11-20 14:15:09.023933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.024011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:54:11.914 [2024-11-20 14:15:09.024031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.274 ms 00:54:11.914 [2024-11-20 14:15:09.024044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.042174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.042250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:54:11.914 [2024-11-20 14:15:09.042268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.059 ms 00:54:11.914 [2024-11-20 14:15:09.042280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.060297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.060376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:54:11.914 [2024-11-20 14:15:09.060394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.952 ms 00:54:11.914 [2024-11-20 14:15:09.060407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.078662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.078745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:54:11.914 [2024-11-20 14:15:09.078765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.092 ms 00:54:11.914 [2024-11-20 14:15:09.078777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.078846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:54:11.914 [2024-11-20 14:15:09.078869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:54:11.914 [2024-11-20 14:15:09.078884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:54:11.914 [2024-11-20 14:15:09.078920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:54:11.914 [2024-11-20 14:15:09.078933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.078946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.078959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.078972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.078984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.078997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:54:11.914 [2024-11-20 14:15:09.079143] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:54:11.914 [2024-11-20 14:15:09.079155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c5110b1b-b875-4f10-bb21-e6aa2465c979 00:54:11.914 [2024-11-20 14:15:09.079177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:54:11.914 [2024-11-20 14:15:09.079190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:54:11.914 [2024-11-20 14:15:09.079203] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:54:11.914 [2024-11-20 14:15:09.079216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:54:11.914 [2024-11-20 14:15:09.079228] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:54:11.914 [2024-11-20 14:15:09.079246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:54:11.914 [2024-11-20 14:15:09.079259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:54:11.914 [2024-11-20 14:15:09.079270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:54:11.914 [2024-11-20 14:15:09.079281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:54:11.914 [2024-11-20 14:15:09.079293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.079310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:54:11.914 [2024-11-20 14:15:09.079323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.449 ms 00:54:11.914 [2024-11-20 14:15:09.079335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.104528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.104611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:54:11.914 [2024-11-20 14:15:09.104631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.134 ms 00:54:11.914 [2024-11-20 14:15:09.104656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.105298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:11.914 [2024-11-20 14:15:09.105326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:54:11.914 [2024-11-20 14:15:09.105340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:54:11.914 [2024-11-20 14:15:09.105353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.186869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:11.914 [2024-11-20 14:15:09.186947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:54:11.914 [2024-11-20 14:15:09.186972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:11.914 [2024-11-20 14:15:09.186985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.187044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:11.914 [2024-11-20 14:15:09.187057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:54:11.914 [2024-11-20 14:15:09.187069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:11.914 [2024-11-20 14:15:09.187081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.187225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:11.914 [2024-11-20 14:15:09.187243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:54:11.914 [2024-11-20 14:15:09.187255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:11.914 [2024-11-20 14:15:09.187290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:11.914 [2024-11-20 14:15:09.187313] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:11.914 [2024-11-20 14:15:09.187327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:54:11.914 [2024-11-20 14:15:09.187339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:11.914 [2024-11-20 14:15:09.187361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.336511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.336585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:54:12.173 [2024-11-20 14:15:09.336614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.336627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:54:12.173 [2024-11-20 14:15:09.462299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.462312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:54:12.173 [2024-11-20 14:15:09.462506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.462519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:54:12.173 [2024-11-20 14:15:09.462618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.462630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:54:12.173 [2024-11-20 14:15:09.462823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.462836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:54:12.173 [2024-11-20 14:15:09.462923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.462935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.462982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.462997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:54:12.173 [2024-11-20 14:15:09.463010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.463021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 
[2024-11-20 14:15:09.463076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:12.173 [2024-11-20 14:15:09.463091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:54:12.173 [2024-11-20 14:15:09.463104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:12.173 [2024-11-20 14:15:09.463117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:12.173 [2024-11-20 14:15:09.463270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8580.071 ms, result 0 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84295 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84295 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84295 ']' 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:14.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:14.702 14:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:54:14.964 [2024-11-20 14:15:12.116463] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:54:14.964 [2024-11-20 14:15:12.116660] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84295 ] 00:54:15.222 [2024-11-20 14:15:12.297277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:15.222 [2024-11-20 14:15:12.436916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:16.597 [2024-11-20 14:15:13.551247] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:54:16.597 [2024-11-20 14:15:13.551350] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:54:16.597 [2024-11-20 14:15:13.704379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.704470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:54:16.597 [2024-11-20 14:15:13.704510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:54:16.597 [2024-11-20 14:15:13.704527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.704667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.704705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:54:16.597 [2024-11-20 14:15:13.704733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:54:16.597 [2024-11-20 14:15:13.704758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.704810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:54:16.597 [2024-11-20 14:15:13.706116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:54:16.597 [2024-11-20 14:15:13.706178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.706194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:54:16.597 [2024-11-20 14:15:13.706211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.377 ms 00:54:16.597 [2024-11-20 14:15:13.706227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.708136] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:54:16.597 [2024-11-20 14:15:13.732605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.732747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:54:16.597 [2024-11-20 14:15:13.732789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.465 ms 00:54:16.597 [2024-11-20 14:15:13.732805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.732983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.733003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:54:16.597 [2024-11-20 14:15:13.733019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:54:16.597 [2024-11-20 14:15:13.733033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.741262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 
14:15:13.741337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:54:16.597 [2024-11-20 14:15:13.741357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.061 ms 00:54:16.597 [2024-11-20 14:15:13.741372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.741509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.741531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:54:16.597 [2024-11-20 14:15:13.741547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:54:16.597 [2024-11-20 14:15:13.741561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.741646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.741664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:54:16.597 [2024-11-20 14:15:13.741684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:54:16.597 [2024-11-20 14:15:13.741698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.741738] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:54:16.597 [2024-11-20 14:15:13.747612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.747700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:54:16.597 [2024-11-20 14:15:13.747720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.881 ms 00:54:16.597 [2024-11-20 14:15:13.747742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.747795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.747811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:54:16.597 [2024-11-20 14:15:13.747827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:54:16.597 [2024-11-20 14:15:13.747842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.597 [2024-11-20 14:15:13.747951] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:54:16.597 [2024-11-20 14:15:13.747993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:54:16.597 [2024-11-20 14:15:13.748045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:54:16.597 [2024-11-20 14:15:13.748070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:54:16.597 [2024-11-20 14:15:13.748184] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:54:16.597 [2024-11-20 14:15:13.748203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:54:16.597 [2024-11-20 14:15:13.748222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:54:16.597 [2024-11-20 14:15:13.748240] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:54:16.597 [2024-11-20 14:15:13.748258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:54:16.597 [2024-11-20 14:15:13.748280] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:54:16.597 [2024-11-20 14:15:13.748294] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:54:16.597 [2024-11-20 14:15:13.748308] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:54:16.597 [2024-11-20 14:15:13.748323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:54:16.597 [2024-11-20 14:15:13.748338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.597 [2024-11-20 14:15:13.748352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:54:16.597 [2024-11-20 14:15:13.748367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:54:16.597 [2024-11-20 14:15:13.748381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.598 [2024-11-20 14:15:13.748482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.598 [2024-11-20 14:15:13.748513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:54:16.598 [2024-11-20 14:15:13.748530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:54:16.598 [2024-11-20 14:15:13.748549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.598 [2024-11-20 14:15:13.748670] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:54:16.598 [2024-11-20 14:15:13.748696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:54:16.598 [2024-11-20 14:15:13.748712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:54:16.598 [2024-11-20 14:15:13.748728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:54:16.598 [2024-11-20 14:15:13.748757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:54:16.598 [2024-11-20 14:15:13.748784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:54:16.598 [2024-11-20 14:15:13.748798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:54:16.598 [2024-11-20 14:15:13.748811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:54:16.598 [2024-11-20 14:15:13.748838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:54:16.598 [2024-11-20 14:15:13.748851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:54:16.598 [2024-11-20 14:15:13.748879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:54:16.598 [2024-11-20 14:15:13.748892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:54:16.598 [2024-11-20 14:15:13.748919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:54:16.598 [2024-11-20 14:15:13.748932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.748945] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:54:16.598 [2024-11-20 14:15:13.748959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:54:16.598 [2024-11-20 14:15:13.748972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:16.598 [2024-11-20 14:15:13.748985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:54:16.598 [2024-11-20 14:15:13.748998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:54:16.598 [2024-11-20 14:15:13.749012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:54:16.598 [2024-11-20 14:15:13.749058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:54:16.598 [2024-11-20 14:15:13.749071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:54:16.598 [2024-11-20 14:15:13.749098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:54:16.598 [2024-11-20 14:15:13.749112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:54:16.598 [2024-11-20 14:15:13.749140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:54:16.598 [2024-11-20 14:15:13.749153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:54:16.598 [2024-11-20 14:15:13.749182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:54:16.598 [2024-11-20 14:15:13.749226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:54:16.598 [2024-11-20 14:15:13.749267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:54:16.598 [2024-11-20 14:15:13.749280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:54:16.598 [2024-11-20 14:15:13.749308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:54:16.598 [2024-11-20 14:15:13.749322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:16.598 [2024-11-20 14:15:13.749358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:54:16.598 [2024-11-20 14:15:13.749373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:54:16.598 [2024-11-20 14:15:13.749386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:54:16.598 [2024-11-20 14:15:13.749400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:54:16.598 [2024-11-20 14:15:13.749414] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:54:16.598 [2024-11-20 14:15:13.749427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:54:16.598 [2024-11-20 14:15:13.749442] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:54:16.598 [2024-11-20 14:15:13.749460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:54:16.598 [2024-11-20 14:15:13.749512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:54:16.598 [2024-11-20 14:15:13.749557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:54:16.598 [2024-11-20 14:15:13.749572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:54:16.598 [2024-11-20 14:15:13.749587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:54:16.598 [2024-11-20 14:15:13.749602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:54:16.598 [2024-11-20 14:15:13.749707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:54:16.598 [2024-11-20 14:15:13.749728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:54:16.598 [2024-11-20 14:15:13.749760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:54:16.598 [2024-11-20 14:15:13.749774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:54:16.598 [2024-11-20 14:15:13.749789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:54:16.598 [2024-11-20 14:15:13.749806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:16.598 [2024-11-20 14:15:13.749820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:54:16.598 [2024-11-20 14:15:13.749836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:54:16.598 [2024-11-20 14:15:13.749850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:16.598 [2024-11-20 14:15:13.749917] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:54:16.598 [2024-11-20 14:15:13.749936] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:54:20.785 [2024-11-20 14:15:17.994567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:17.994663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:54:20.785 [2024-11-20 14:15:17.994684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4244.624 ms 00:54:20.785 [2024-11-20 14:15:17.994698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.032452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.032531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:54:20.785 [2024-11-20 14:15:18.032550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.423 ms 00:54:20.785 [2024-11-20 14:15:18.032563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.032703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.032726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:54:20.785 [2024-11-20 14:15:18.032740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:54:20.785 [2024-11-20 14:15:18.032752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.079151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.079217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:54:20.785 [2024-11-20 14:15:18.079235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.321 ms 00:54:20.785 [2024-11-20 14:15:18.079254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.079321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.079335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:54:20.785 [2024-11-20 14:15:18.079349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:54:20.785 [2024-11-20 14:15:18.079378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.079925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.079953] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:54:20.785 [2024-11-20 14:15:18.079968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms 00:54:20.785 [2024-11-20 14:15:18.079982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.080040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.080055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:54:20.785 [2024-11-20 14:15:18.080069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:54:20.785 [2024-11-20 14:15:18.080082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:20.785 [2024-11-20 14:15:18.102821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:20.785 [2024-11-20 14:15:18.102882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:54:20.785 [2024-11-20 14:15:18.102899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.706 ms 00:54:20.785 [2024-11-20 14:15:18.102912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.140816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:54:21.044 [2024-11-20 14:15:18.140891] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:54:21.044 [2024-11-20 14:15:18.140913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.140935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:54:21.044 [2024-11-20 14:15:18.140951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.800 ms 00:54:21.044 [2024-11-20 14:15:18.140964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.164214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.164288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:54:21.044 [2024-11-20 14:15:18.164307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.129 ms 00:54:21.044 [2024-11-20 14:15:18.164339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.184973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.185041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:54:21.044 [2024-11-20 14:15:18.185059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.531 ms 00:54:21.044 [2024-11-20 14:15:18.185072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.204364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.204426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:54:21.044 [2024-11-20 14:15:18.204445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.189 ms 00:54:21.044 [2024-11-20 14:15:18.204457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.205408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.205458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:54:21.044 [2024-11-20 
14:15:18.205474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.739 ms 00:54:21.044 [2024-11-20 14:15:18.205502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.310931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.311040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:54:21.044 [2024-11-20 14:15:18.311077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.385 ms 00:54:21.044 [2024-11-20 14:15:18.311092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.324725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:54:21.044 [2024-11-20 14:15:18.325922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.325962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:54:21.044 [2024-11-20 14:15:18.325980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.707 ms 00:54:21.044 [2024-11-20 14:15:18.325994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.326136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.326166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:54:21.044 [2024-11-20 14:15:18.326181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:54:21.044 [2024-11-20 14:15:18.326194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.326275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.326298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:54:21.044 [2024-11-20 14:15:18.326313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:54:21.044 [2024-11-20 14:15:18.326326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.326360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.326375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:54:21.044 [2024-11-20 14:15:18.326393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:54:21.044 [2024-11-20 14:15:18.326407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.044 [2024-11-20 14:15:18.326503] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:54:21.044 [2024-11-20 14:15:18.326530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.044 [2024-11-20 14:15:18.326544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:54:21.044 [2024-11-20 14:15:18.326558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:54:21.044 [2024-11-20 14:15:18.326572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.303 [2024-11-20 14:15:18.367217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.303 [2024-11-20 14:15:18.367314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:54:21.303 [2024-11-20 14:15:18.367336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.608 ms 00:54:21.303 [2024-11-20 14:15:18.367351] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.303 [2024-11-20 14:15:18.367526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.303 [2024-11-20 14:15:18.367551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:54:21.303 [2024-11-20 14:15:18.367567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:54:21.303 [2024-11-20 14:15:18.367580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.303 [2024-11-20 14:15:18.369062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4664.047 ms, result 0 00:54:21.303 [2024-11-20 14:15:18.383796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:21.303 [2024-11-20 14:15:18.399813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:54:21.303 [2024-11-20 14:15:18.410034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:54:21.303 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:21.303 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:54:21.303 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:54:21.303 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:54:21.303 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:54:21.561 [2024-11-20 14:15:18.670093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.561 [2024-11-20 14:15:18.670163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:54:21.561 [2024-11-20 14:15:18.670186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:54:21.561 [2024-11-20 14:15:18.670206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.561 [2024-11-20 14:15:18.670242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.561 [2024-11-20 14:15:18.670258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:54:21.562 [2024-11-20 14:15:18.670273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:21.562 [2024-11-20 14:15:18.670286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.562 [2024-11-20 14:15:18.670315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:21.562 [2024-11-20 14:15:18.670329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:54:21.562 [2024-11-20 14:15:18.670344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:21.562 [2024-11-20 14:15:18.670358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:21.562 [2024-11-20 14:15:18.670440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.341 ms, result 0 00:54:21.562 true 00:54:21.562 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:21.819 { 00:54:21.819 "name": "ftl", 00:54:21.819 "properties": [ 00:54:21.819 { 00:54:21.819 "name": "superblock_version", 00:54:21.819 "value": 5, 00:54:21.819 "read-only": true 00:54:21.819 }, 
00:54:21.819 { 00:54:21.819 "name": "base_device", 00:54:21.819 "bands": [ 00:54:21.819 { 00:54:21.819 "id": 0, 00:54:21.819 "state": "CLOSED", 00:54:21.819 "validity": 1.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 1, 00:54:21.819 "state": "CLOSED", 00:54:21.819 "validity": 1.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 2, 00:54:21.819 "state": "CLOSED", 00:54:21.819 "validity": 0.007843137254901933 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 3, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 4, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 5, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 6, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 7, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 8, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 9, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 10, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 11, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 12, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 13, 00:54:21.819 "state": "FREE", 00:54:21.819 "validity": 0.0 00:54:21.819 }, 00:54:21.819 { 00:54:21.819 "id": 14, 00:54:21.819 "state": "FREE", 00:54:21.820 "validity": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 15, 00:54:21.820 "state": "FREE", 00:54:21.820 "validity": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 16, 00:54:21.820 "state": "FREE", 00:54:21.820 "validity": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 17, 00:54:21.820 "state": "FREE", 00:54:21.820 "validity": 0.0 00:54:21.820 } 00:54:21.820 ], 00:54:21.820 "read-only": true 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "name": "cache_device", 00:54:21.820 "type": "bdev", 00:54:21.820 "chunks": [ 00:54:21.820 { 00:54:21.820 "id": 0, 00:54:21.820 "state": "INACTIVE", 00:54:21.820 "utilization": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 1, 00:54:21.820 "state": "OPEN", 00:54:21.820 "utilization": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 2, 00:54:21.820 "state": "OPEN", 00:54:21.820 "utilization": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 3, 00:54:21.820 "state": "FREE", 00:54:21.820 "utilization": 0.0 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "id": 4, 00:54:21.820 "state": "FREE", 00:54:21.820 "utilization": 0.0 00:54:21.820 } 00:54:21.820 ], 00:54:21.820 "read-only": true 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "name": "verbose_mode", 00:54:21.820 "value": true, 00:54:21.820 "unit": "", 00:54:21.820 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:54:21.820 }, 00:54:21.820 { 00:54:21.820 "name": "prep_upgrade_on_shutdown", 00:54:21.820 "value": false, 00:54:21.820 "unit": "", 00:54:21.820 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:54:21.820 } 00:54:21.820 ] 00:54:21.820 } 00:54:21.820 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:54:21.820 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:54:21.820 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:22.083 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:54:22.083 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:54:22.083 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:54:22.083 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:54:22.083 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:54:22.348 Validate MD5 checksum, iteration 1 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:54:22.348 14:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:54:22.348 [2024-11-20 14:15:19.620895] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:54:22.348 [2024-11-20 14:15:19.621029] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84398 ] 00:54:22.606 [2024-11-20 14:15:19.798581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:22.606 [2024-11-20 14:15:19.923606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:24.509  [2024-11-20T14:15:22.766Z] Copying: 534/1024 [MB] (534 MBps) [2024-11-20T14:15:24.668Z] Copying: 1024/1024 [MB] (average 517 MBps) 00:54:27.345 00:54:27.345 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:54:27.345 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:29.963 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:54:29.963 Validate MD5 checksum, iteration 2 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6ecf9918a4f4f5cf84e76b91294d968e 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6ecf9918a4f4f5cf84e76b91294d968e != \6\e\c\f\9\9\1\8\a\4\f\4\f\5\c\f\8\4\e\7\6\b\9\1\2\9\4\d\9\6\8\e ]] 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:54:29.964 14:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:54:29.964 [2024-11-20 14:15:26.823580] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
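The two "Validate MD5 checksum" iterations around this point follow one pattern: spdk_dd reads a 1024 MiB window (bs=1048576, count=1024, qd=2) from ftln1 through the NVMe/TCP initiator config, the window is hashed with md5sum, the digest must match the one recorded when the pattern was written, and skip then advances by the window size (0, 1024, 2048). A condensed sketch of that loop, assuming a WRITTEN_SUMS array captured during the earlier write phase (a hypothetical name; the harness records these itself):

    dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for ((i = 0; i < 2; i++)); do
      "$dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$file" | cut -f1 -d ' ')
      # A mismatch here means data was lost or reordered across the FTL restart.
      [[ $sum == "${WRITTEN_SUMS[i]}" ]] || { echo "MD5 mismatch in window $i" >&2; exit 1; }
      skip=$((skip + 1024))
    done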
00:54:29.964 [2024-11-20 14:15:26.824435] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84478 ] 00:54:29.964 [2024-11-20 14:15:27.090725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:29.964 [2024-11-20 14:15:27.231114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:31.868  [2024-11-20T14:15:30.125Z] Copying: 417/1024 [MB] (417 MBps) [2024-11-20T14:15:30.384Z] Copying: 904/1024 [MB] (487 MBps) [2024-11-20T14:15:32.285Z] Copying: 1024/1024 [MB] (average 457 MBps) 00:54:34.962 00:54:34.962 14:15:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:54:34.962 14:15:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=040f3c24af0bfa6674b0b689371b47cb 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 040f3c24af0bfa6674b0b689371b47cb != \0\4\0\f\3\c\2\4\a\f\0\b\f\a\6\6\7\4\b\0\b\6\8\9\3\7\1\b\4\7\c\b ]] 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84295 ]] 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84295 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84561 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84561 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84561 ']' 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:36.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:36.865 14:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:54:37.123 [2024-11-20 14:15:34.276354] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:54:37.123 [2024-11-20 14:15:34.276644] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84561 ] 00:54:37.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84295 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:54:37.382 [2024-11-20 14:15:34.499012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:37.382 [2024-11-20 14:15:34.669121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:38.759 [2024-11-20 14:15:35.792170] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:54:38.759 [2024-11-20 14:15:35.792268] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:54:38.759 [2024-11-20 14:15:35.943888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.943995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:54:38.759 [2024-11-20 14:15:35.944019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:54:38.759 [2024-11-20 14:15:35.944033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.944132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.944150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:54:38.759 [2024-11-20 14:15:35.944164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:54:38.759 [2024-11-20 14:15:35.944177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.944208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:54:38.759 [2024-11-20 14:15:35.945418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:54:38.759 [2024-11-20 14:15:35.945466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.945495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:54:38.759 [2024-11-20 14:15:35.945511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:54:38.759 [2024-11-20 14:15:35.945524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.946119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:54:38.759 [2024-11-20 14:15:35.975898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.976003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:54:38.759 [2024-11-20 14:15:35.976026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.772 ms 00:54:38.759 [2024-11-20 14:15:35.976040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.994285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:54:38.759 [2024-11-20 14:15:35.994395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:54:38.759 [2024-11-20 14:15:35.994420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:54:38.759 [2024-11-20 14:15:35.994432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.995150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.995189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:54:38.759 [2024-11-20 14:15:35.995204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.556 ms 00:54:38.759 [2024-11-20 14:15:35.995217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.995305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.995322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:54:38.759 [2024-11-20 14:15:35.995336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:54:38.759 [2024-11-20 14:15:35.995348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.995389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:35.995412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:54:38.759 [2024-11-20 14:15:35.995426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:54:38.759 [2024-11-20 14:15:35.995438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:35.995521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:54:38.759 [2024-11-20 14:15:36.001305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:36.001376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:54:38.759 [2024-11-20 14:15:36.001393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.838 ms 00:54:38.759 [2024-11-20 14:15:36.001407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:36.001465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:36.001492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:54:38.759 [2024-11-20 14:15:36.001506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:54:38.759 [2024-11-20 14:15:36.001519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:36.001589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:54:38.759 [2024-11-20 14:15:36.001619] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:54:38.759 [2024-11-20 14:15:36.001662] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:54:38.759 [2024-11-20 14:15:36.001687] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:54:38.759 [2024-11-20 14:15:36.001799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:54:38.759 [2024-11-20 14:15:36.001815] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:54:38.759 [2024-11-20 14:15:36.001832] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:54:38.759 [2024-11-20 14:15:36.001848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:54:38.759 [2024-11-20 14:15:36.001863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:54:38.759 [2024-11-20 14:15:36.001877] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:54:38.759 [2024-11-20 14:15:36.001889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:54:38.759 [2024-11-20 14:15:36.001902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:54:38.759 [2024-11-20 14:15:36.001913] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:54:38.759 [2024-11-20 14:15:36.001927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:36.001943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:54:38.759 [2024-11-20 14:15:36.001956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:54:38.759 [2024-11-20 14:15:36.001968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:36.002063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.759 [2024-11-20 14:15:36.002084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:54:38.759 [2024-11-20 14:15:36.002097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:54:38.759 [2024-11-20 14:15:36.002110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.759 [2024-11-20 14:15:36.002224] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:54:38.759 [2024-11-20 14:15:36.002245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:54:38.759 [2024-11-20 14:15:36.002263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:54:38.759 [2024-11-20 14:15:36.002276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.759 [2024-11-20 14:15:36.002289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:54:38.759 [2024-11-20 14:15:36.002301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:54:38.760 [2024-11-20 14:15:36.002325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:54:38.760 [2024-11-20 14:15:36.002336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:54:38.760 [2024-11-20 14:15:36.002347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:54:38.760 [2024-11-20 14:15:36.002371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:54:38.760 [2024-11-20 14:15:36.002382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:54:38.760 [2024-11-20 14:15:36.002405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:54:38.760 [2024-11-20 14:15:36.002420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:54:38.760 [2024-11-20 14:15:36.002443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:54:38.760 [2024-11-20 14:15:36.002454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:54:38.760 [2024-11-20 14:15:36.002497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:54:38.760 [2024-11-20 14:15:36.002549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:54:38.760 [2024-11-20 14:15:36.002584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:54:38.760 [2024-11-20 14:15:36.002617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:54:38.760 [2024-11-20 14:15:36.002651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:54:38.760 [2024-11-20 14:15:36.002685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:54:38.760 [2024-11-20 14:15:36.002720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:54:38.760 [2024-11-20 14:15:36.002753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:54:38.760 [2024-11-20 14:15:36.002764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002775] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:54:38.760 [2024-11-20 14:15:36.002788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:54:38.760 [2024-11-20 14:15:36.002800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:54:38.760 [2024-11-20 14:15:36.002827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:54:38.760 [2024-11-20 14:15:36.002839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:54:38.760 [2024-11-20 14:15:36.002851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:54:38.760 [2024-11-20 14:15:36.002863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:54:38.760 [2024-11-20 14:15:36.002874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:54:38.760 [2024-11-20 14:15:36.002886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:54:38.760 [2024-11-20 14:15:36.002900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:54:38.760 [2024-11-20 14:15:36.002915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.002929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:54:38.760 [2024-11-20 14:15:36.002942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.002955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.002968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:54:38.760 [2024-11-20 14:15:36.002982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:54:38.760 [2024-11-20 14:15:36.002995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:54:38.760 [2024-11-20 14:15:36.003007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:54:38.760 [2024-11-20 14:15:36.003020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:54:38.760 [2024-11-20 14:15:36.003108] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:54:38.760 [2024-11-20 14:15:36.003122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:54:38.760 [2024-11-20 14:15:36.003154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:54:38.760 [2024-11-20 14:15:36.003167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:54:38.760 [2024-11-20 14:15:36.003180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:54:38.760 [2024-11-20 14:15:36.003194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.760 [2024-11-20 14:15:36.003207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:54:38.760 [2024-11-20 14:15:36.003219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:54:38.760 [2024-11-20 14:15:36.003231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.760 [2024-11-20 14:15:36.048641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.760 [2024-11-20 14:15:36.048717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:54:38.760 [2024-11-20 14:15:36.048739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.331 ms 00:54:38.760 [2024-11-20 14:15:36.048756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:38.760 [2024-11-20 14:15:36.048833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:38.760 [2024-11-20 14:15:36.048851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:54:38.760 [2024-11-20 14:15:36.048868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:54:38.760 [2024-11-20 14:15:36.048884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.104289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.104372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:54:39.019 [2024-11-20 14:15:36.104394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.276 ms 00:54:39.019 [2024-11-20 14:15:36.104409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.104501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.104517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:54:39.019 [2024-11-20 14:15:36.104531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:54:39.019 [2024-11-20 14:15:36.104551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.104729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.104746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:54:39.019 [2024-11-20 14:15:36.104760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:54:39.019 [2024-11-20 14:15:36.104772] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.104824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.104869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:54:39.019 [2024-11-20 14:15:36.104882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:54:39.019 [2024-11-20 14:15:36.104894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.128676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.128751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:54:39.019 [2024-11-20 14:15:36.128772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.745 ms 00:54:39.019 [2024-11-20 14:15:36.128791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.129015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.129047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:54:39.019 [2024-11-20 14:15:36.129062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:54:39.019 [2024-11-20 14:15:36.129075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.172168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.172285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:54:39.019 [2024-11-20 14:15:36.172316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.047 ms 00:54:39.019 [2024-11-20 14:15:36.172332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.190890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.190984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:54:39.019 [2024-11-20 14:15:36.191016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms 00:54:39.019 [2024-11-20 14:15:36.191032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.302972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.303091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:54:39.019 [2024-11-20 14:15:36.303125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 111.802 ms 00:54:39.019 [2024-11-20 14:15:36.303139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.019 [2024-11-20 14:15:36.303391] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:54:39.019 [2024-11-20 14:15:36.303578] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:54:39.019 [2024-11-20 14:15:36.303819] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:54:39.019 [2024-11-20 14:15:36.304106] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:54:39.019 [2024-11-20 14:15:36.304146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.019 [2024-11-20 14:15:36.304169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:54:39.019 
[2024-11-20 14:15:36.304196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:54:39.019 [2024-11-20 14:15:36.304220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.020 [2024-11-20 14:15:36.304373] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:54:39.020 [2024-11-20 14:15:36.304400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.020 [2024-11-20 14:15:36.304429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:54:39.020 [2024-11-20 14:15:36.304452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:54:39.020 [2024-11-20 14:15:36.304468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.020 [2024-11-20 14:15:36.335341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.020 [2024-11-20 14:15:36.335442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:54:39.020 [2024-11-20 14:15:36.335464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.804 ms 00:54:39.020 [2024-11-20 14:15:36.335505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.278 [2024-11-20 14:15:36.353831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.278 [2024-11-20 14:15:36.353956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:54:39.278 [2024-11-20 14:15:36.353986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:54:39.278 [2024-11-20 14:15:36.354008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.278 [2024-11-20 14:15:36.354214] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:54:39.278 [2024-11-20 14:15:36.354467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.278 [2024-11-20 14:15:36.354528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:54:39.278 [2024-11-20 14:15:36.354552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:54:39.278 [2024-11-20 14:15:36.354573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.536 [2024-11-20 14:15:36.802516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.536 [2024-11-20 14:15:36.802600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:54:39.536 [2024-11-20 14:15:36.802623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 445.779 ms 00:54:39.536 [2024-11-20 14:15:36.802638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.536 [2024-11-20 14:15:36.809359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.536 [2024-11-20 14:15:36.809425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:54:39.536 [2024-11-20 14:15:36.809443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:54:39.536 [2024-11-20 14:15:36.809456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.536 [2024-11-20 14:15:36.809903] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:54:39.536 [2024-11-20 14:15:36.809942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.536 [2024-11-20 14:15:36.809956] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:54:39.536 [2024-11-20 14:15:36.809970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:54:39.536 [2024-11-20 14:15:36.809983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.536 [2024-11-20 14:15:36.810070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.536 [2024-11-20 14:15:36.810086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:54:39.536 [2024-11-20 14:15:36.810100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:54:39.536 [2024-11-20 14:15:36.810112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:39.536 [2024-11-20 14:15:36.810164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 455.956 ms, result 0 00:54:39.536 [2024-11-20 14:15:36.810219] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:54:39.536 [2024-11-20 14:15:36.810326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:39.536 [2024-11-20 14:15:36.810339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:54:39.536 [2024-11-20 14:15:36.810351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.108 ms 00:54:39.536 [2024-11-20 14:15:36.810362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.105 [2024-11-20 14:15:37.254691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.105 [2024-11-20 14:15:37.254797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:54:40.105 [2024-11-20 14:15:37.254821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 442.603 ms 00:54:40.105 [2024-11-20 14:15:37.254837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.105 [2024-11-20 14:15:37.261276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.105 [2024-11-20 14:15:37.261350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:54:40.105 [2024-11-20 14:15:37.261369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.992 ms 00:54:40.105 [2024-11-20 14:15:37.261382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.105 [2024-11-20 14:15:37.261968] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:54:40.106 [2024-11-20 14:15:37.262009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.262023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:54:40.106 [2024-11-20 14:15:37.262038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:54:40.106 [2024-11-20 14:15:37.262051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.262090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.262105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:54:40.106 [2024-11-20 14:15:37.262118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:54:40.106 [2024-11-20 14:15:37.262129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 
[2024-11-20 14:15:37.262183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 451.954 ms, result 0 00:54:40.106 [2024-11-20 14:15:37.262240] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:54:40.106 [2024-11-20 14:15:37.262257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:54:40.106 [2024-11-20 14:15:37.262272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.262285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:54:40.106 [2024-11-20 14:15:37.262300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 908.088 ms 00:54:40.106 [2024-11-20 14:15:37.262312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.262353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.262368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:54:40.106 [2024-11-20 14:15:37.262386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:54:40.106 [2024-11-20 14:15:37.262399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.278390] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:54:40.106 [2024-11-20 14:15:37.278669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.278694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:54:40.106 [2024-11-20 14:15:37.278712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.247 ms 00:54:40.106 [2024-11-20 14:15:37.278724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.279546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.279581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:54:40.106 [2024-11-20 14:15:37.279602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:54:40.106 [2024-11-20 14:15:37.279615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:54:40.106 [2024-11-20 14:15:37.282162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.472 ms 00:54:40.106 [2024-11-20 14:15:37.282175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:54:40.106 [2024-11-20 14:15:37.282260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:54:40.106 [2024-11-20 14:15:37.282278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:54:40.106 [2024-11-20 14:15:37.282436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:54:40.106 [2024-11-20 14:15:37.282448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:54:40.106 [2024-11-20 14:15:37.282533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:54:40.106 [2024-11-20 14:15:37.282545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282590] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:54:40.106 [2024-11-20 14:15:37.282606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:54:40.106 [2024-11-20 14:15:37.282631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:54:40.106 [2024-11-20 14:15:37.282643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.282711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:40.106 [2024-11-20 14:15:37.282726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:54:40.106 [2024-11-20 14:15:37.282740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:54:40.106 [2024-11-20 14:15:37.282752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:40.106 [2024-11-20 14:15:37.284072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1339.614 ms, result 0 00:54:40.106 [2024-11-20 14:15:37.299378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:40.106 [2024-11-20 14:15:37.315407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:54:40.106 [2024-11-20 14:15:37.326000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:40.106 Validate MD5 checksum, iteration 1 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:54:40.106 14:15:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:54:40.106 14:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:54:40.366 [2024-11-20 14:15:37.457814] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:54:40.366 [2024-11-20 14:15:37.458018] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84596 ] 00:54:40.366 [2024-11-20 14:15:37.679668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:40.627 [2024-11-20 14:15:37.851582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:42.527  [2024-11-20T14:15:40.788Z] Copying: 441/1024 [MB] (441 MBps) [2024-11-20T14:15:41.104Z] Copying: 908/1024 [MB] (467 MBps) [2024-11-20T14:15:43.651Z] Copying: 1024/1024 [MB] (average 461 MBps) 00:54:46.328 00:54:46.328 14:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:54:46.328 14:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6ecf9918a4f4f5cf84e76b91294d968e 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6ecf9918a4f4f5cf84e76b91294d968e != \6\e\c\f\9\9\1\8\a\4\f\4\f\5\c\f\8\4\e\7\6\b\9\1\2\9\4\d\9\6\8\e ]] 00:54:48.857 Validate MD5 checksum, iteration 2 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:54:48.857 14:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:54:48.857 
[2024-11-20 14:15:45.900151] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:54:48.857 [2024-11-20 14:15:45.900308] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84680 ] 00:54:48.857 [2024-11-20 14:15:46.087863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:49.116 [2024-11-20 14:15:46.276605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:51.022  [2024-11-20T14:15:49.283Z] Copying: 434/1024 [MB] (434 MBps) [2024-11-20T14:15:49.283Z] Copying: 956/1024 [MB] (522 MBps) [2024-11-20T14:15:51.185Z] Copying: 1024/1024 [MB] (average 480 MBps) 00:54:53.862 00:54:53.862 14:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:54:53.862 14:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=040f3c24af0bfa6674b0b689371b47cb 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 040f3c24af0bfa6674b0b689371b47cb != \0\4\0\f\3\c\2\4\a\f\0\b\f\a\6\6\7\4\b\0\b\6\8\9\3\7\1\b\4\7\c\b ]] 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:54:55.766 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84561 ]] 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84561 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84561 ']' 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84561 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84561 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:56.024 killing process with pid 84561 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84561' 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84561 00:54:56.024 14:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84561 00:54:57.474 [2024-11-20 14:15:54.574782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:54:57.474 [2024-11-20 14:15:54.600089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.600181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:54:57.474 [2024-11-20 14:15:54.600201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:54:57.474 [2024-11-20 14:15:54.600214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.600247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:54:57.474 [2024-11-20 14:15:54.605068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.605139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:54:57.474 [2024-11-20 14:15:54.605186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.798 ms 00:54:57.474 [2024-11-20 14:15:54.605199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.605552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.605584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:54:57.474 [2024-11-20 14:15:54.605598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:54:57.474 [2024-11-20 14:15:54.605610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.606962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.607013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:54:57.474 [2024-11-20 14:15:54.607031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:54:57.474 [2024-11-20 14:15:54.607043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.608268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.608309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:54:57.474 [2024-11-20 14:15:54.608324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:54:57.474 [2024-11-20 14:15:54.608338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.627086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.627176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:54:57.474 [2024-11-20 14:15:54.627196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.691 ms 00:54:57.474 [2024-11-20 14:15:54.627222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.637340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.637427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:54:57.474 [2024-11-20 14:15:54.637447] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.025 ms 00:54:57.474 [2024-11-20 14:15:54.637474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.637641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.637659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:54:57.474 [2024-11-20 14:15:54.637674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:54:57.474 [2024-11-20 14:15:54.637687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.656325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.656427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:54:57.474 [2024-11-20 14:15:54.656449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.586 ms 00:54:57.474 [2024-11-20 14:15:54.656461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.674812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.674898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:54:57.474 [2024-11-20 14:15:54.674918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.270 ms 00:54:57.474 [2024-11-20 14:15:54.674930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.693706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.693795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:54:57.474 [2024-11-20 14:15:54.693814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.688 ms 00:54:57.474 [2024-11-20 14:15:54.693844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.712086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.712172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:54:57.474 [2024-11-20 14:15:54.712193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.111 ms 00:54:57.474 [2024-11-20 14:15:54.712223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.712283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:54:57.474 [2024-11-20 14:15:54.712307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:54:57.474 [2024-11-20 14:15:54.712324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:54:57.474 [2024-11-20 14:15:54.712337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:54:57.474 [2024-11-20 14:15:54.712351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 
14:15:54.712405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:54:57.474 [2024-11-20 14:15:54.712568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:54:57.474 [2024-11-20 14:15:54.712581] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c5110b1b-b875-4f10-bb21-e6aa2465c979 00:54:57.474 [2024-11-20 14:15:54.712595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:54:57.474 [2024-11-20 14:15:54.712607] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:54:57.474 [2024-11-20 14:15:54.712619] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:54:57.474 [2024-11-20 14:15:54.712633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:54:57.474 [2024-11-20 14:15:54.712645] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:54:57.474 [2024-11-20 14:15:54.712657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:54:57.474 [2024-11-20 14:15:54.712669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:54:57.474 [2024-11-20 14:15:54.712681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:54:57.474 [2024-11-20 14:15:54.712704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:54:57.474 [2024-11-20 14:15:54.712716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.712743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:54:57.474 [2024-11-20 14:15:54.712762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.434 ms 00:54:57.474 [2024-11-20 14:15:54.712774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.736387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.736469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:54:57.474 [2024-11-20 14:15:54.736514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 23.560 ms 00:54:57.474 [2024-11-20 14:15:54.736527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.474 [2024-11-20 14:15:54.737231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:54:57.474 [2024-11-20 14:15:54.737256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:54:57.474 [2024-11-20 14:15:54.737269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.654 ms 00:54:57.474 [2024-11-20 14:15:54.737281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.733 [2024-11-20 14:15:54.815543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.733 [2024-11-20 14:15:54.815615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:54:57.733 [2024-11-20 14:15:54.815635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.733 [2024-11-20 14:15:54.815648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.733 [2024-11-20 14:15:54.815717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.733 [2024-11-20 14:15:54.815730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:54:57.733 [2024-11-20 14:15:54.815743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.733 [2024-11-20 14:15:54.815755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.733 [2024-11-20 14:15:54.815901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.733 [2024-11-20 14:15:54.815918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:54:57.733 [2024-11-20 14:15:54.815931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.733 [2024-11-20 14:15:54.815943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.733 [2024-11-20 14:15:54.815966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.733 [2024-11-20 14:15:54.815985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:54:57.733 [2024-11-20 14:15:54.815997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.733 [2024-11-20 14:15:54.816009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.733 [2024-11-20 14:15:54.963804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.733 [2024-11-20 14:15:54.963886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:54:57.733 [2024-11-20 14:15:54.963906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.733 [2024-11-20 14:15:54.963919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.085964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:54:57.992 [2024-11-20 14:15:55.086072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:54:57.992 [2024-11-20 14:15:55.086265] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:54:57.992 [2024-11-20 14:15:55.086380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:54:57.992 [2024-11-20 14:15:55.086622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:54:57.992 [2024-11-20 14:15:55.086715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:54:57.992 [2024-11-20 14:15:55.086805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.086872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:54:57.992 [2024-11-20 14:15:55.086892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:54:57.992 [2024-11-20 14:15:55.086910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:54:57.992 [2024-11-20 14:15:55.086940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:54:57.992 [2024-11-20 14:15:55.087080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 486.953 ms, result 0 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:54:59.368 Remove shared memory files 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84295 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:54:59.368 00:54:59.368 real 1m41.279s 00:54:59.368 user 2m22.944s 00:54:59.368 sys 0m27.158s 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:59.368 14:15:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:54:59.368 ************************************ 00:54:59.368 END TEST ftl_upgrade_shutdown 00:54:59.368 ************************************ 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@14 -- # killprocess 77273 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 77273 ']' 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@958 -- # kill -0 77273 00:54:59.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77273) - No such process 00:54:59.368 Process with pid 77273 is not found 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77273 is not found' 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84822 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:54:59.368 14:15:56 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84822 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 84822 ']' 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:59.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:59.368 14:15:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:54:59.627 [2024-11-20 14:15:56.776677] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:54:59.627 [2024-11-20 14:15:56.776836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84822 ] 00:54:59.890 [2024-11-20 14:15:56.962984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:59.890 [2024-11-20 14:15:57.102464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:01.280 14:15:58 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:01.280 14:15:58 ftl -- common/autotest_common.sh@868 -- # return 0 00:55:01.281 14:15:58 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:55:01.539 nvme0n1 00:55:01.539 14:15:58 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:55:01.539 14:15:58 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:55:01.539 14:15:58 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:55:01.798 14:15:58 ftl -- ftl/common.sh@28 -- # stores=2f64fd16-cf5c-4ddf-9880-773b310174d2 00:55:01.798 14:15:58 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:55:01.798 14:15:58 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f64fd16-cf5c-4ddf-9880-773b310174d2 00:55:02.055 14:15:59 ftl -- ftl/ftl.sh@23 -- # killprocess 84822 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 84822 ']' 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@958 -- # kill -0 84822 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@959 -- # uname 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84822 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:02.055 killing process with pid 84822 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84822' 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@973 -- # kill 84822 00:55:02.055 14:15:59 ftl -- common/autotest_common.sh@978 -- # wait 84822 00:55:05.337 14:16:02 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:05.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:05.337 Waiting for block devices as requested 00:55:05.337 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:05.596 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:05.596 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:55:05.596 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:55:10.866 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:55:10.866 14:16:07 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:55:10.866 Remove shared memory files 00:55:10.866 14:16:07 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:55:10.866 14:16:07 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:55:10.866 14:16:07 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:55:10.866 14:16:07 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:55:10.866 14:16:07 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:55:10.866 14:16:07 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:55:10.866 
************************************ 00:55:10.866 END TEST ftl 00:55:10.866 ************************************ 00:55:10.866 00:55:10.866 real 11m19.066s 00:55:10.866 user 14m8.145s 00:55:10.866 sys 1m39.919s 00:55:10.866 14:16:07 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:10.866 14:16:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:55:10.866 14:16:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:55:10.866 14:16:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:55:10.866 14:16:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:55:10.866 14:16:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:55:10.866 14:16:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:55:10.866 14:16:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:55:10.866 14:16:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:55:10.866 14:16:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:55:10.866 14:16:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:55:10.866 14:16:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:55:10.866 14:16:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:10.866 14:16:08 -- common/autotest_common.sh@10 -- # set +x 00:55:10.866 14:16:08 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:55:10.866 14:16:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:55:10.866 14:16:08 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:55:10.866 14:16:08 -- common/autotest_common.sh@10 -- # set +x 00:55:12.768 INFO: APP EXITING 00:55:12.768 INFO: killing all VMs 00:55:12.768 INFO: killing vhost app 00:55:12.768 INFO: EXIT DONE 00:55:13.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:13.594 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:55:13.594 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:55:13.852 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:55:13.852 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:55:14.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:14.675 Cleaning 00:55:14.676 Removing: /var/run/dpdk/spdk0/config 00:55:14.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:55:14.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:55:14.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:55:14.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:55:14.676 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:55:14.676 Removing: /var/run/dpdk/spdk0/hugepage_info 00:55:14.676 Removing: /var/run/dpdk/spdk0 00:55:14.676 Removing: /var/run/dpdk/spdk_pid57764 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58016 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58250 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58360 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58416 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58555 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58584 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58794 00:55:14.676 Removing: /var/run/dpdk/spdk_pid58917 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59030 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59158 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59271 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59311 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59347 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59424 00:55:14.676 Removing: /var/run/dpdk/spdk_pid59536 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60012 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60097 
00:55:14.676 Removing: /var/run/dpdk/spdk_pid60172 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60194 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60358 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60380 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60549 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60566 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60641 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60659 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60729 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60752 00:55:14.676 Removing: /var/run/dpdk/spdk_pid60964 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61006 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61095 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61289 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61395 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61443 00:55:14.676 Removing: /var/run/dpdk/spdk_pid61932 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62041 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62162 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62215 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62246 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62330 00:55:14.676 Removing: /var/run/dpdk/spdk_pid62974 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63022 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63532 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63637 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63752 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63810 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63837 00:55:14.676 Removing: /var/run/dpdk/spdk_pid63868 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65770 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65918 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65922 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65945 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65988 00:55:14.676 Removing: /var/run/dpdk/spdk_pid65992 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66004 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66054 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66058 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66070 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66115 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66119 00:55:14.676 Removing: /var/run/dpdk/spdk_pid66141 00:55:14.676 Removing: /var/run/dpdk/spdk_pid67563 00:55:14.676 Removing: /var/run/dpdk/spdk_pid67678 00:55:14.676 Removing: /var/run/dpdk/spdk_pid69103 00:55:14.676 Removing: /var/run/dpdk/spdk_pid70859 00:55:14.676 Removing: /var/run/dpdk/spdk_pid70944 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71026 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71149 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71242 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71342 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71433 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71508 00:55:14.676 Removing: /var/run/dpdk/spdk_pid71618 00:55:14.934 Removing: /var/run/dpdk/spdk_pid71721 00:55:14.934 Removing: /var/run/dpdk/spdk_pid71828 00:55:14.934 Removing: /var/run/dpdk/spdk_pid71914 00:55:14.934 Removing: /var/run/dpdk/spdk_pid71996 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72106 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72203 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72306 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72391 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72475 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72586 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72685 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72796 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72877 00:55:14.934 Removing: /var/run/dpdk/spdk_pid72952 00:55:14.934 Removing: 
/var/run/dpdk/spdk_pid73032 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73112 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73226 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73318 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73423 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73504 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73585 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73665 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73750 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73859 00:55:14.934 Removing: /var/run/dpdk/spdk_pid73951 00:55:14.934 Removing: /var/run/dpdk/spdk_pid74107 00:55:14.934 Removing: /var/run/dpdk/spdk_pid74398 00:55:14.934 Removing: /var/run/dpdk/spdk_pid74439 00:55:14.934 Removing: /var/run/dpdk/spdk_pid74919 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75111 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75220 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75336 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75413 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75438 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75744 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75814 00:55:14.934 Removing: /var/run/dpdk/spdk_pid75902 00:55:14.934 Removing: /var/run/dpdk/spdk_pid76325 00:55:14.934 Removing: /var/run/dpdk/spdk_pid76477 00:55:14.934 Removing: /var/run/dpdk/spdk_pid77273 00:55:14.934 Removing: /var/run/dpdk/spdk_pid77427 00:55:14.934 Removing: /var/run/dpdk/spdk_pid77625 00:55:14.934 Removing: /var/run/dpdk/spdk_pid77738 00:55:14.934 Removing: /var/run/dpdk/spdk_pid78086 00:55:14.934 Removing: /var/run/dpdk/spdk_pid78352 00:55:14.934 Removing: /var/run/dpdk/spdk_pid78727 00:55:14.934 Removing: /var/run/dpdk/spdk_pid78947 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79073 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79147 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79281 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79317 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79381 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79587 00:55:14.934 Removing: /var/run/dpdk/spdk_pid79841 00:55:14.934 Removing: /var/run/dpdk/spdk_pid80248 00:55:14.934 Removing: /var/run/dpdk/spdk_pid80665 00:55:14.934 Removing: /var/run/dpdk/spdk_pid81101 00:55:14.934 Removing: /var/run/dpdk/spdk_pid81571 00:55:14.934 Removing: /var/run/dpdk/spdk_pid81723 00:55:14.934 Removing: /var/run/dpdk/spdk_pid81821 00:55:14.934 Removing: /var/run/dpdk/spdk_pid82420 00:55:14.934 Removing: /var/run/dpdk/spdk_pid82502 00:55:14.934 Removing: /var/run/dpdk/spdk_pid82877 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83221 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83658 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83786 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83852 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83927 00:55:14.934 Removing: /var/run/dpdk/spdk_pid83996 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84067 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84295 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84398 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84478 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84561 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84596 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84680 00:55:14.934 Removing: /var/run/dpdk/spdk_pid84822 00:55:14.934 Clean 00:55:15.193 14:16:12 -- common/autotest_common.sh@1453 -- # return 0 00:55:15.194 14:16:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:55:15.194 14:16:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:15.194 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:55:15.194 14:16:12 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:55:15.194 14:16:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:15.194 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:55:15.194 14:16:12 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:55:15.194 14:16:12 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:55:15.194 14:16:12 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:55:15.194 14:16:12 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:55:15.194 14:16:12 -- spdk/autotest.sh@398 -- # hostname 00:55:15.194 14:16:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:55:15.452 geninfo: WARNING: invalid characters removed from testname! 00:55:47.536 14:16:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:55:48.912 14:16:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:55:52.247 14:16:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:55:55.530 14:16:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:55:58.062 14:16:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:00.618 14:16:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:03.149 14:17:00 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:56:03.149 14:17:00 -- spdk/autorun.sh@1 -- $ timing_finish 00:56:03.149 14:17:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:56:03.149 14:17:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:56:03.149 14:17:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:56:03.149 14:17:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:56:03.149 + [[ -n 5291 ]] 00:56:03.149 + sudo kill 5291 00:56:03.157 [Pipeline] } 00:56:03.174 [Pipeline] // timeout 00:56:03.179 [Pipeline] } 00:56:03.195 [Pipeline] // stage 00:56:03.200 [Pipeline] } 00:56:03.214 [Pipeline] // catchError 00:56:03.223 [Pipeline] stage 00:56:03.225 [Pipeline] { (Stop VM) 00:56:03.237 [Pipeline] sh 00:56:03.517 + vagrant halt 00:56:08.778 ==> default: Halting domain... 00:56:15.347 [Pipeline] sh 00:56:15.621 + vagrant destroy -f 00:56:19.853 ==> default: Removing domain... 00:56:19.865 [Pipeline] sh 00:56:20.149 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:56:20.158 [Pipeline] } 00:56:20.175 [Pipeline] // stage 00:56:20.181 [Pipeline] } 00:56:20.196 [Pipeline] // dir 00:56:20.202 [Pipeline] } 00:56:20.216 [Pipeline] // wrap 00:56:20.223 [Pipeline] } 00:56:20.236 [Pipeline] // catchError 00:56:20.246 [Pipeline] stage 00:56:20.248 [Pipeline] { (Epilogue) 00:56:20.261 [Pipeline] sh 00:56:20.542 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:56:28.764 [Pipeline] catchError 00:56:28.767 [Pipeline] { 00:56:28.782 [Pipeline] sh 00:56:29.064 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:56:29.321 Artifacts sizes are good 00:56:29.330 [Pipeline] } 00:56:29.344 [Pipeline] // catchError 00:56:29.356 [Pipeline] archiveArtifacts 00:56:29.364 Archiving artifacts 00:56:29.518 [Pipeline] cleanWs 00:56:29.533 [WS-CLEANUP] Deleting project workspace... 00:56:29.533 [WS-CLEANUP] Deferred wipeout is used... 00:56:29.539 [WS-CLEANUP] done 00:56:29.541 [Pipeline] } 00:56:29.558 [Pipeline] // stage 00:56:29.565 [Pipeline] } 00:56:29.580 [Pipeline] // node 00:56:29.585 [Pipeline] End of Pipeline 00:56:29.626 Finished: SUCCESS
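The xtrace interleaved above spells out the checksum-validation pattern the ftl_upgrade_shutdown test relies on: each iteration copies a 1 GiB window out of the ftln1 bdev over NVMe/TCP with spdk_dd, hashes the copy with md5sum, and compares the digest against the value recorded before the shutdown/upgrade cycle. Below is a minimal bash sketch of that loop, reconstructed from the trace rather than taken from the actual ftl/upgrade_shutdown.sh; the tcp_dd helper and the file path appear in the log as-is, while the expected_md5 array is a hypothetical stand-in for the sums the real test captures up front.

    #!/usr/bin/env bash
    # Simplified reconstruction of the validate loop seen in the
    # ftl/upgrade_shutdown.sh xtrace; not the actual SPDK script.
    set -euo pipefail

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2
    skip=0
    # Hypothetical: digests recorded before the shutdown/upgrade cycle.
    expected_md5=(6ecf9918a4f4f5cf84e76b91294d968e 040f3c24af0bfa6674b0b689371b47cb)

    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd (ftl/common.sh) wraps spdk_dd: read 1024 x 1 MiB blocks
        # from the ftln1 bdev over NVMe/TCP, starting at the current offset.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))

        sum=$(md5sum "$file" | cut -f1 -d' ')
        if [[ $sum != "${expected_md5[i]}" ]]; then
            echo "MD5 mismatch on iteration $((i + 1)): got $sum" >&2
            exit 1
        fi
    done

The per-iteration skip offsets (0, then 1024 one-MiB blocks) are what make the two passes in the log cover distinct 1 GiB regions of the device, so a digest match on both confirms that data written before the FTL shutdown survived the upgrade path intact.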